Test Report: KVM_Linux_containerd 20512

48b5bd1b410deb6f0834786c8abc7687a18ec8ba:2025-04-14:39137

Failed tests (15/326)

TestMultiControlPlane/serial/StartCluster (75.86s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-290859 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
ha_test.go:101: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ha-290859 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 80 (1m13.948979388s)
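The `(dbg) Run:` lines above come from the integration suite shelling out to the freshly built binary and capturing its combined output. A minimal Go sketch of that pattern (hypothetical and simplified; the suite's real helper threads a test context and records the duration metric shown in the exit line):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Same binary and flags as the failing invocation recorded above.
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "ha-290859",
		"--wait=true", "--memory=2200", "--ha", "-v=7", "--alsologtostderr",
		"--driver=kvm2", "--container-runtime=containerd")
	out, err := cmd.CombinedOutput()
	os.Stdout.Write(out)
	if err != nil {
		// A failed start surfaces as an *exec.ExitError; this run exited 80.
		fmt.Fprintln(os.Stderr, "non-zero exit:", err)
	}
}

Rerunning the same command against a clean profile (e.g. after `minikube delete -p ha-290859`) is the usual way to check whether the failure reproduces outside CI.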

-- stdout --
	* [ha-290859] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20512-1196368/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1196368/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "ha-290859" primary control-plane node in "ha-290859" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.32.2 on containerd 1.7.23 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	* Starting "ha-290859-m02" control-plane node in "ha-290859" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Found network options:
	  - NO_PROXY=192.168.39.110
	* Preparing Kubernetes v1.32.2 on containerd 1.7.23 ...
	  - env NO_PROXY=192.168.39.110
	
	

-- /stdout --
** stderr ** 
	I0414 14:28:44.853283 1213155 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:28:44.853383 1213155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:28:44.853391 1213155 out.go:358] Setting ErrFile to fd 2...
	I0414 14:28:44.853395 1213155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:28:44.853589 1213155 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
	I0414 14:28:44.854173 1213155 out.go:352] Setting JSON to false
	I0414 14:28:44.855127 1213155 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":22268,"bootTime":1744618657,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 14:28:44.855241 1213155 start.go:139] virtualization: kvm guest
	I0414 14:28:44.857434 1213155 out.go:177] * [ha-290859] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 14:28:44.858763 1213155 out.go:177]   - MINIKUBE_LOCATION=20512
	I0414 14:28:44.858802 1213155 notify.go:220] Checking for updates...
	I0414 14:28:44.861113 1213155 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 14:28:44.862568 1213155 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:28:44.864291 1213155 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:28:44.865558 1213155 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 14:28:44.866690 1213155 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 14:28:44.867994 1213155 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 14:28:44.903880 1213155 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 14:28:44.904972 1213155 start.go:297] selected driver: kvm2
	I0414 14:28:44.904990 1213155 start.go:901] validating driver "kvm2" against <nil>
	I0414 14:28:44.905002 1213155 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 14:28:44.905693 1213155 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:28:44.905760 1213155 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20512-1196368/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 14:28:44.921165 1213155 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 14:28:44.921211 1213155 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 14:28:44.921449 1213155 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 14:28:44.921483 1213155 cni.go:84] Creating CNI manager for ""
	I0414 14:28:44.921521 1213155 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0414 14:28:44.921528 1213155 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0414 14:28:44.921581 1213155 start.go:340] cluster config:
	{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:28:44.921681 1213155 iso.go:125] acquiring lock: {Name:mkbf783c803effe6c4b8297ac6b84dcca9e29413 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:28:44.923479 1213155 out.go:177] * Starting "ha-290859" primary control-plane node in "ha-290859" cluster
	I0414 14:28:44.924489 1213155 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:28:44.924534 1213155 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
	I0414 14:28:44.924545 1213155 cache.go:56] Caching tarball of preloaded images
	I0414 14:28:44.924630 1213155 preload.go:172] Found /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0414 14:28:44.924642 1213155 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0414 14:28:44.925004 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:28:44.925036 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json: {Name:mk9cf46898e9311ef305249e5d7a46d116958366 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:28:44.925215 1213155 start.go:360] acquireMachinesLock for ha-290859: {Name:mk496006d22a0565bb9e0d565e1b3cb0cf0971cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 14:28:44.925249 1213155 start.go:364] duration metric: took 19.936µs to acquireMachinesLock for "ha-290859"
	I0414 14:28:44.925270 1213155 start.go:93] Provisioning new machine with config: &{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0414 14:28:44.925333 1213155 start.go:125] createHost starting for "" (driver="kvm2")
	I0414 14:28:44.926873 1213155 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0414 14:28:44.927025 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:28:44.927081 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:28:44.941913 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35769
	I0414 14:28:44.942352 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:28:44.942833 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:28:44.942851 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:28:44.943193 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:28:44.943375 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:28:44.943526 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:28:44.943664 1213155 start.go:159] libmachine.API.Create for "ha-290859" (driver="kvm2")
	I0414 14:28:44.943687 1213155 client.go:168] LocalClient.Create starting
	I0414 14:28:44.943713 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem
	I0414 14:28:44.943749 1213155 main.go:141] libmachine: Decoding PEM data...
	I0414 14:28:44.943766 1213155 main.go:141] libmachine: Parsing certificate...
	I0414 14:28:44.943825 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem
	I0414 14:28:44.943844 1213155 main.go:141] libmachine: Decoding PEM data...
	I0414 14:28:44.943857 1213155 main.go:141] libmachine: Parsing certificate...
	I0414 14:28:44.943880 1213155 main.go:141] libmachine: Running pre-create checks...
	I0414 14:28:44.943888 1213155 main.go:141] libmachine: (ha-290859) Calling .PreCreateCheck
	I0414 14:28:44.944202 1213155 main.go:141] libmachine: (ha-290859) Calling .GetConfigRaw
	I0414 14:28:44.944583 1213155 main.go:141] libmachine: Creating machine...
	I0414 14:28:44.944596 1213155 main.go:141] libmachine: (ha-290859) Calling .Create
	I0414 14:28:44.944741 1213155 main.go:141] libmachine: (ha-290859) creating KVM machine...
	I0414 14:28:44.944764 1213155 main.go:141] libmachine: (ha-290859) creating network...
	I0414 14:28:44.945897 1213155 main.go:141] libmachine: (ha-290859) DBG | found existing default KVM network
	I0414 14:28:44.946500 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:44.946375 1213178 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001236b0}
	I0414 14:28:44.946525 1213155 main.go:141] libmachine: (ha-290859) DBG | created network xml: 
	I0414 14:28:44.946536 1213155 main.go:141] libmachine: (ha-290859) DBG | <network>
	I0414 14:28:44.946547 1213155 main.go:141] libmachine: (ha-290859) DBG |   <name>mk-ha-290859</name>
	I0414 14:28:44.946556 1213155 main.go:141] libmachine: (ha-290859) DBG |   <dns enable='no'/>
	I0414 14:28:44.946567 1213155 main.go:141] libmachine: (ha-290859) DBG |   
	I0414 14:28:44.946578 1213155 main.go:141] libmachine: (ha-290859) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0414 14:28:44.946589 1213155 main.go:141] libmachine: (ha-290859) DBG |     <dhcp>
	I0414 14:28:44.946597 1213155 main.go:141] libmachine: (ha-290859) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0414 14:28:44.946611 1213155 main.go:141] libmachine: (ha-290859) DBG |     </dhcp>
	I0414 14:28:44.946635 1213155 main.go:141] libmachine: (ha-290859) DBG |   </ip>
	I0414 14:28:44.946659 1213155 main.go:141] libmachine: (ha-290859) DBG |   
	I0414 14:28:44.946681 1213155 main.go:141] libmachine: (ha-290859) DBG | </network>
	I0414 14:28:44.946692 1213155 main.go:141] libmachine: (ha-290859) DBG | 
	I0414 14:28:44.951588 1213155 main.go:141] libmachine: (ha-290859) DBG | trying to create private KVM network mk-ha-290859 192.168.39.0/24...
	I0414 14:28:45.019463 1213155 main.go:141] libmachine: (ha-290859) DBG | private KVM network mk-ha-290859 192.168.39.0/24 created
	I0414 14:28:45.019524 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.019424 1213178 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:28:45.019537 1213155 main.go:141] libmachine: (ha-290859) setting up store path in /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859 ...
	I0414 14:28:45.019577 1213155 main.go:141] libmachine: (ha-290859) building disk image from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 14:28:45.019612 1213155 main.go:141] libmachine: (ha-290859) Downloading /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 14:28:45.329551 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.329430 1213178 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa...
	I0414 14:28:45.651739 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.651571 1213178 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/ha-290859.rawdisk...
	I0414 14:28:45.651774 1213155 main.go:141] libmachine: (ha-290859) DBG | Writing magic tar header
	I0414 14:28:45.651813 1213155 main.go:141] libmachine: (ha-290859) DBG | Writing SSH key tar header
	I0414 14:28:45.651828 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.651709 1213178 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859 ...
	I0414 14:28:45.651838 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859
	I0414 14:28:45.651849 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines
	I0414 14:28:45.651870 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:28:45.651877 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368
	I0414 14:28:45.651888 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859 (perms=drwx------)
	I0414 14:28:45.651901 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines (perms=drwxr-xr-x)
	I0414 14:28:45.651912 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube (perms=drwxr-xr-x)
	I0414 14:28:45.651969 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 14:28:45.651997 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins
	I0414 14:28:45.652007 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368 (perms=drwxrwxr-x)
	I0414 14:28:45.652022 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 14:28:45.652031 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 14:28:45.652040 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home
	I0414 14:28:45.652050 1213155 main.go:141] libmachine: (ha-290859) DBG | skipping /home - not owner
	I0414 14:28:45.652117 1213155 main.go:141] libmachine: (ha-290859) creating domain...
	I0414 14:28:45.653155 1213155 main.go:141] libmachine: (ha-290859) define libvirt domain using xml: 
	I0414 14:28:45.653173 1213155 main.go:141] libmachine: (ha-290859) <domain type='kvm'>
	I0414 14:28:45.653182 1213155 main.go:141] libmachine: (ha-290859)   <name>ha-290859</name>
	I0414 14:28:45.653197 1213155 main.go:141] libmachine: (ha-290859)   <memory unit='MiB'>2200</memory>
	I0414 14:28:45.653206 1213155 main.go:141] libmachine: (ha-290859)   <vcpu>2</vcpu>
	I0414 14:28:45.653212 1213155 main.go:141] libmachine: (ha-290859)   <features>
	I0414 14:28:45.653231 1213155 main.go:141] libmachine: (ha-290859)     <acpi/>
	I0414 14:28:45.653240 1213155 main.go:141] libmachine: (ha-290859)     <apic/>
	I0414 14:28:45.653258 1213155 main.go:141] libmachine: (ha-290859)     <pae/>
	I0414 14:28:45.653267 1213155 main.go:141] libmachine: (ha-290859)     
	I0414 14:28:45.653272 1213155 main.go:141] libmachine: (ha-290859)   </features>
	I0414 14:28:45.653277 1213155 main.go:141] libmachine: (ha-290859)   <cpu mode='host-passthrough'>
	I0414 14:28:45.653281 1213155 main.go:141] libmachine: (ha-290859)   
	I0414 14:28:45.653287 1213155 main.go:141] libmachine: (ha-290859)   </cpu>
	I0414 14:28:45.653317 1213155 main.go:141] libmachine: (ha-290859)   <os>
	I0414 14:28:45.653340 1213155 main.go:141] libmachine: (ha-290859)     <type>hvm</type>
	I0414 14:28:45.653351 1213155 main.go:141] libmachine: (ha-290859)     <boot dev='cdrom'/>
	I0414 14:28:45.653362 1213155 main.go:141] libmachine: (ha-290859)     <boot dev='hd'/>
	I0414 14:28:45.653372 1213155 main.go:141] libmachine: (ha-290859)     <bootmenu enable='no'/>
	I0414 14:28:45.653379 1213155 main.go:141] libmachine: (ha-290859)   </os>
	I0414 14:28:45.653387 1213155 main.go:141] libmachine: (ha-290859)   <devices>
	I0414 14:28:45.653396 1213155 main.go:141] libmachine: (ha-290859)     <disk type='file' device='cdrom'>
	I0414 14:28:45.653409 1213155 main.go:141] libmachine: (ha-290859)       <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/boot2docker.iso'/>
	I0414 14:28:45.653425 1213155 main.go:141] libmachine: (ha-290859)       <target dev='hdc' bus='scsi'/>
	I0414 14:28:45.653434 1213155 main.go:141] libmachine: (ha-290859)       <readonly/>
	I0414 14:28:45.653441 1213155 main.go:141] libmachine: (ha-290859)     </disk>
	I0414 14:28:45.653450 1213155 main.go:141] libmachine: (ha-290859)     <disk type='file' device='disk'>
	I0414 14:28:45.653459 1213155 main.go:141] libmachine: (ha-290859)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 14:28:45.653472 1213155 main.go:141] libmachine: (ha-290859)       <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/ha-290859.rawdisk'/>
	I0414 14:28:45.653484 1213155 main.go:141] libmachine: (ha-290859)       <target dev='hda' bus='virtio'/>
	I0414 14:28:45.653515 1213155 main.go:141] libmachine: (ha-290859)     </disk>
	I0414 14:28:45.653535 1213155 main.go:141] libmachine: (ha-290859)     <interface type='network'>
	I0414 14:28:45.653542 1213155 main.go:141] libmachine: (ha-290859)       <source network='mk-ha-290859'/>
	I0414 14:28:45.653551 1213155 main.go:141] libmachine: (ha-290859)       <model type='virtio'/>
	I0414 14:28:45.653571 1213155 main.go:141] libmachine: (ha-290859)     </interface>
	I0414 14:28:45.653583 1213155 main.go:141] libmachine: (ha-290859)     <interface type='network'>
	I0414 14:28:45.653600 1213155 main.go:141] libmachine: (ha-290859)       <source network='default'/>
	I0414 14:28:45.653612 1213155 main.go:141] libmachine: (ha-290859)       <model type='virtio'/>
	I0414 14:28:45.653620 1213155 main.go:141] libmachine: (ha-290859)     </interface>
	I0414 14:28:45.653629 1213155 main.go:141] libmachine: (ha-290859)     <serial type='pty'>
	I0414 14:28:45.653637 1213155 main.go:141] libmachine: (ha-290859)       <target port='0'/>
	I0414 14:28:45.653643 1213155 main.go:141] libmachine: (ha-290859)     </serial>
	I0414 14:28:45.653650 1213155 main.go:141] libmachine: (ha-290859)     <console type='pty'>
	I0414 14:28:45.653666 1213155 main.go:141] libmachine: (ha-290859)       <target type='serial' port='0'/>
	I0414 14:28:45.653677 1213155 main.go:141] libmachine: (ha-290859)     </console>
	I0414 14:28:45.653688 1213155 main.go:141] libmachine: (ha-290859)     <rng model='virtio'>
	I0414 14:28:45.653706 1213155 main.go:141] libmachine: (ha-290859)       <backend model='random'>/dev/random</backend>
	I0414 14:28:45.653722 1213155 main.go:141] libmachine: (ha-290859)     </rng>
	I0414 14:28:45.653733 1213155 main.go:141] libmachine: (ha-290859)     
	I0414 14:28:45.653742 1213155 main.go:141] libmachine: (ha-290859)     
	I0414 14:28:45.653750 1213155 main.go:141] libmachine: (ha-290859)   </devices>
	I0414 14:28:45.653759 1213155 main.go:141] libmachine: (ha-290859) </domain>
	I0414 14:28:45.653770 1213155 main.go:141] libmachine: (ha-290859) 
	I0414 14:28:45.658722 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:59:bb:2c in network default
	I0414 14:28:45.659333 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:45.659353 1213155 main.go:141] libmachine: (ha-290859) starting domain...
	I0414 14:28:45.659378 1213155 main.go:141] libmachine: (ha-290859) ensuring networks are active...
	I0414 14:28:45.660118 1213155 main.go:141] libmachine: (ha-290859) Ensuring network default is active
	I0414 14:28:45.660455 1213155 main.go:141] libmachine: (ha-290859) Ensuring network mk-ha-290859 is active
	I0414 14:28:45.660871 1213155 main.go:141] libmachine: (ha-290859) getting domain XML...
	I0414 14:28:45.661572 1213155 main.go:141] libmachine: (ha-290859) creating domain...
	I0414 14:28:46.865636 1213155 main.go:141] libmachine: (ha-290859) waiting for IP...
	I0414 14:28:46.866384 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:46.866766 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:46.866798 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:46.866746 1213178 retry.go:31] will retry after 192.973653ms: waiting for domain to come up
	I0414 14:28:47.061336 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:47.061771 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:47.061833 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:47.061746 1213178 retry.go:31] will retry after 359.567223ms: waiting for domain to come up
	I0414 14:28:47.423487 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:47.423982 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:47.424016 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:47.423949 1213178 retry.go:31] will retry after 421.939914ms: waiting for domain to come up
	I0414 14:28:47.847747 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:47.848233 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:47.848285 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:47.848207 1213178 retry.go:31] will retry after 530.391474ms: waiting for domain to come up
	I0414 14:28:48.380081 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:48.380580 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:48.380623 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:48.380551 1213178 retry.go:31] will retry after 642.117854ms: waiting for domain to come up
	I0414 14:28:49.024104 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:49.024507 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:49.024543 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:49.024472 1213178 retry.go:31] will retry after 676.607867ms: waiting for domain to come up
	I0414 14:28:49.702625 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:49.702971 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:49.702999 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:49.702940 1213178 retry.go:31] will retry after 827.403569ms: waiting for domain to come up
	I0414 14:28:50.531673 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:50.532146 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:50.532168 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:50.532111 1213178 retry.go:31] will retry after 1.096062201s: waiting for domain to come up
	I0414 14:28:51.630700 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:51.631223 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:51.631271 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:51.631180 1213178 retry.go:31] will retry after 1.695737217s: waiting for domain to come up
	I0414 14:28:53.328391 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:53.328936 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:53.328976 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:53.328895 1213178 retry.go:31] will retry after 1.847433296s: waiting for domain to come up
	I0414 14:28:55.178635 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:55.179196 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:55.179222 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:55.179116 1213178 retry.go:31] will retry after 1.882043118s: waiting for domain to come up
	I0414 14:28:57.063275 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:57.063819 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:57.063839 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:57.063785 1213178 retry.go:31] will retry after 2.565601812s: waiting for domain to come up
	I0414 14:28:59.632546 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:59.633076 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:59.633121 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:59.633056 1213178 retry.go:31] will retry after 3.119155423s: waiting for domain to come up
	I0414 14:29:02.755950 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:02.756520 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:29:02.756617 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:29:02.756481 1213178 retry.go:31] will retry after 3.570724653s: waiting for domain to come up
	I0414 14:29:06.329744 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.330242 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has current primary IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.330260 1213155 main.go:141] libmachine: (ha-290859) found domain IP: 192.168.39.110
	I0414 14:29:06.330269 1213155 main.go:141] libmachine: (ha-290859) reserving static IP address...
	I0414 14:29:06.330641 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find host DHCP lease matching {name: "ha-290859", mac: "52:54:00:be:9f:8b", ip: "192.168.39.110"} in network mk-ha-290859
	I0414 14:29:06.406487 1213155 main.go:141] libmachine: (ha-290859) DBG | Getting to WaitForSSH function...
	I0414 14:29:06.406521 1213155 main.go:141] libmachine: (ha-290859) reserved static IP address 192.168.39.110 for domain ha-290859
	I0414 14:29:06.406533 1213155 main.go:141] libmachine: (ha-290859) waiting for SSH...
	I0414 14:29:06.409873 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.410210 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:minikube Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.410253 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.410314 1213155 main.go:141] libmachine: (ha-290859) DBG | Using SSH client type: external
	I0414 14:29:06.410387 1213155 main.go:141] libmachine: (ha-290859) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa (-rw-------)
	I0414 14:29:06.410418 1213155 main.go:141] libmachine: (ha-290859) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.110 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:29:06.410439 1213155 main.go:141] libmachine: (ha-290859) DBG | About to run SSH command:
	I0414 14:29:06.410452 1213155 main.go:141] libmachine: (ha-290859) DBG | exit 0
	I0414 14:29:06.535060 1213155 main.go:141] libmachine: (ha-290859) DBG | SSH cmd err, output: <nil>: 
	I0414 14:29:06.535328 1213155 main.go:141] libmachine: (ha-290859) KVM machine creation complete
	I0414 14:29:06.535695 1213155 main.go:141] libmachine: (ha-290859) Calling .GetConfigRaw
	I0414 14:29:06.536306 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:06.536530 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:06.536742 1213155 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 14:29:06.536766 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:06.538276 1213155 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 14:29:06.538292 1213155 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 14:29:06.538297 1213155 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 14:29:06.538303 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:06.540789 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.541096 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.541142 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.541273 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:06.541468 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.541620 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.541797 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:06.541943 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:06.542216 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:06.542236 1213155 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 14:29:06.650464 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:29:06.650493 1213155 main.go:141] libmachine: Detecting the provisioner...
	I0414 14:29:06.650505 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:06.653952 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.654723 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.654757 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.654985 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:06.655204 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.655393 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.655541 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:06.655742 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:06.655964 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:06.655983 1213155 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 14:29:06.763752 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 14:29:06.763848 1213155 main.go:141] libmachine: found compatible host: buildroot
	I0414 14:29:06.763862 1213155 main.go:141] libmachine: Provisioning with buildroot...
	I0414 14:29:06.763874 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:29:06.764294 1213155 buildroot.go:166] provisioning hostname "ha-290859"
	I0414 14:29:06.764326 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:29:06.764523 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:06.767077 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.767516 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.767542 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.767639 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:06.767813 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.767978 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.768165 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:06.768341 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:06.768572 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:06.768583 1213155 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-290859 && echo "ha-290859" | sudo tee /etc/hostname
	I0414 14:29:06.889296 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-290859
	
	I0414 14:29:06.889330 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:06.892172 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.892600 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.892626 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.892865 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:06.893083 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.893277 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.893435 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:06.893648 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:06.893858 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:06.893874 1213155 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-290859' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-290859/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-290859' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:29:07.007141 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:29:07.007184 1213155 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1196368/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1196368/.minikube}
	I0414 14:29:07.007203 1213155 buildroot.go:174] setting up certificates
	I0414 14:29:07.007215 1213155 provision.go:84] configureAuth start
	I0414 14:29:07.007224 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:29:07.007528 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:29:07.010400 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.010788 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.010824 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.010979 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.012963 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.013271 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.013387 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.013515 1213155 provision.go:143] copyHostCerts
	I0414 14:29:07.013548 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:29:07.013586 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem, removing ...
	I0414 14:29:07.013609 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:29:07.013691 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem (1082 bytes)
	I0414 14:29:07.013790 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:29:07.013815 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem, removing ...
	I0414 14:29:07.013825 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:29:07.013863 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem (1123 bytes)
	I0414 14:29:07.013930 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:29:07.013953 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem, removing ...
	I0414 14:29:07.013962 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:29:07.013998 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem (1675 bytes)
	I0414 14:29:07.014066 1213155 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem org=jenkins.ha-290859 san=[127.0.0.1 192.168.39.110 ha-290859 localhost minikube]
	I0414 14:29:07.096347 1213155 provision.go:177] copyRemoteCerts
	I0414 14:29:07.096413 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:29:07.096445 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.099387 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.099720 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.099754 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.099919 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.100133 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.100320 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.100477 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:07.185597 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0414 14:29:07.185665 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 14:29:07.208427 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0414 14:29:07.208514 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0414 14:29:07.230077 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0414 14:29:07.230146 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 14:29:07.252057 1213155 provision.go:87] duration metric: took 244.822415ms to configureAuth
	I0414 14:29:07.252098 1213155 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:29:07.252381 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:07.252417 1213155 main.go:141] libmachine: Checking connection to Docker...
	I0414 14:29:07.252428 1213155 main.go:141] libmachine: (ha-290859) Calling .GetURL
	I0414 14:29:07.253526 1213155 main.go:141] libmachine: (ha-290859) DBG | using libvirt version 6000000
	I0414 14:29:07.255629 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.255987 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.256013 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.256164 1213155 main.go:141] libmachine: Docker is up and running!
	I0414 14:29:07.256179 1213155 main.go:141] libmachine: Reticulating splines...
	I0414 14:29:07.256186 1213155 client.go:171] duration metric: took 22.312490028s to LocalClient.Create
	I0414 14:29:07.256207 1213155 start.go:167] duration metric: took 22.312544194s to libmachine.API.Create "ha-290859"
	I0414 14:29:07.256216 1213155 start.go:293] postStartSetup for "ha-290859" (driver="kvm2")
	I0414 14:29:07.256225 1213155 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:29:07.256242 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.256494 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:29:07.256518 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.258683 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.259095 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.259129 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.259274 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.259443 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.259598 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.259770 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:07.341222 1213155 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:29:07.344960 1213155 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:29:07.344983 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/addons for local assets ...
	I0414 14:29:07.345036 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/files for local assets ...
	I0414 14:29:07.345105 1213155 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> 12036392.pem in /etc/ssl/certs
	I0414 14:29:07.345117 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /etc/ssl/certs/12036392.pem
	I0414 14:29:07.345204 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:29:07.353618 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:29:07.375295 1213155 start.go:296] duration metric: took 119.0622ms for postStartSetup
	I0414 14:29:07.375348 1213155 main.go:141] libmachine: (ha-290859) Calling .GetConfigRaw
	I0414 14:29:07.376009 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:29:07.378738 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.379089 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.379127 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.379360 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:29:07.379552 1213155 start.go:128] duration metric: took 22.454193164s to createHost
	I0414 14:29:07.379576 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.381911 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.382271 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.382299 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.382412 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.382636 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.382763 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.382918 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.383103 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:07.383383 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:07.383397 1213155 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:29:07.491798 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744640947.466359070
	
	I0414 14:29:07.491832 1213155 fix.go:216] guest clock: 1744640947.466359070
	I0414 14:29:07.491843 1213155 fix.go:229] Guest: 2025-04-14 14:29:07.46635907 +0000 UTC Remote: 2025-04-14 14:29:07.37956282 +0000 UTC m=+22.563725092 (delta=86.79625ms)
	I0414 14:29:07.491874 1213155 fix.go:200] guest clock delta is within tolerance: 86.79625ms
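	The guest-clock check above runs "date +%s.%N" inside the VM and compares it against a host-side timestamp captured around the SSH round-trip; the clock is only resynced when the delta exceeds a tolerance. A minimal Go sketch of that comparison (hypothetical names; the tolerance passed by the caller is an assumption, not minikube's actual threshold):

		package clockcheck

		import (
			"math"
			"strconv"
			"strings"
			"time"
		)

		// deltaWithinTolerance parses the guest's "date +%s.%N" output and
		// compares it to the host-side reference time, mirroring the
		// "guest clock delta is within tolerance" line in the log.
		func deltaWithinTolerance(guestOut string, ref time.Time, tol time.Duration) (time.Duration, bool, error) {
			secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
			if err != nil {
				return 0, false, err
			}
			guest := time.Unix(0, int64(secs*float64(time.Second)))
			delta := guest.Sub(ref)
			return delta, math.Abs(float64(delta)) <= float64(tol), nil
		}

	Fed the two timestamps in the log above, this reproduces the ~86.8ms delta that was judged within tolerance.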
	I0414 14:29:07.491882 1213155 start.go:83] releasing machines lock for "ha-290859", held for 22.566621352s
	I0414 14:29:07.491951 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.492257 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:29:07.494784 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.495186 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.495213 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.495369 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.495891 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.496108 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.496210 1213155 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:29:07.496270 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.496330 1213155 ssh_runner.go:195] Run: cat /version.json
	I0414 14:29:07.496359 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.499187 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.499556 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.499585 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.499605 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.499687 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.499909 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.500059 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.500076 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.500080 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.500225 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:07.500343 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.500495 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.500676 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.500868 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:07.610155 1213155 ssh_runner.go:195] Run: systemctl --version
	I0414 14:29:07.615832 1213155 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 14:29:07.620841 1213155 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:29:07.620918 1213155 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:29:07.635201 1213155 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
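	Renaming the bridge/podman conflists to *.mk_disabled, as above, is how a single CNI is kept active: the runtime loads configuration files from /etc/cni/net.d in lexical order, so a leftover 87-podman-bridge.conflist would otherwise take precedence over the CNI minikube installs later in this run (kindnet).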
	I0414 14:29:07.635238 1213155 start.go:495] detecting cgroup driver to use...
	I0414 14:29:07.635339 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0414 14:29:07.664507 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0414 14:29:07.677886 1213155 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:29:07.677968 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:29:07.691126 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:29:07.704327 1213155 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:29:07.821296 1213155 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:29:07.981478 1213155 docker.go:233] disabling docker service ...
	I0414 14:29:07.981570 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:29:07.995082 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:29:08.007593 1213155 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:29:08.118166 1213155 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:29:08.233009 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 14:29:08.245943 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:29:08.262966 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0414 14:29:08.272218 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0414 14:29:08.281344 1213155 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0414 14:29:08.281397 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0414 14:29:08.290468 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:29:08.299561 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0414 14:29:08.308656 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:29:08.317719 1213155 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:29:08.327133 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0414 14:29:08.336264 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0414 14:29:08.345279 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
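	The sed pipeline above rewrites /etc/containerd/config.toml in place: pin the pause image, force SystemdCgroup = false (matching the "cgroupfs" driver chosen at 14:29:08.281344), migrate any v1 runtime entries to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. A rough Go equivalent of just the SystemdCgroup edit (a sketch, not minikube's code):

		package containerdcfg

		import (
			"os"
			"regexp"
		)

		// Mirrors: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
		var systemdCgroupRe = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)

		// forceCgroupfs rewrites every SystemdCgroup assignment to false,
		// preserving the original indentation via the captured group.
		func forceCgroupfs(path string) error {
			data, err := os.ReadFile(path)
			if err != nil {
				return err
			}
			out := systemdCgroupRe.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
			return os.WriteFile(path, out, 0644)
		}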
	I0414 14:29:08.354386 1213155 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:29:08.362578 1213155 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:29:08.362625 1213155 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:29:08.374609 1213155 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 14:29:08.383117 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:29:08.490311 1213155 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0414 14:29:08.517222 1213155 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0414 14:29:08.517297 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:29:08.522141 1213155 retry.go:31] will retry after 1.326617724s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0414 14:29:09.849693 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
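	After restarting containerd, the code above polls for the socket rather than sleeping a fixed interval: the first stat at 14:29:08.517 fails because containerd has not finished starting, and the retry at 14:29:09.849 succeeds. A hypothetical sketch of such a wait loop in Go (the real code stats the path via SSH with backoff from retry.go):

		package sockwait

		import (
			"fmt"
			"net"
			"time"
		)

		// waitForSocket polls until the containerd socket accepts
		// connections or the 60s budget noted in the log runs out.
		func waitForSocket(path string, timeout time.Duration) error {
			deadline := time.Now().Add(timeout)
			for time.Now().Before(deadline) {
				if c, err := net.Dial("unix", path); err == nil {
					c.Close()
					return nil
				}
				time.Sleep(time.Second)
			}
			return fmt.Errorf("%s did not appear within %v", path, timeout)
		}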
	I0414 14:29:09.855377 1213155 start.go:563] Will wait 60s for crictl version
	I0414 14:29:09.855452 1213155 ssh_runner.go:195] Run: which crictl
	I0414 14:29:09.859356 1213155 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:29:09.901676 1213155 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0414 14:29:09.901749 1213155 ssh_runner.go:195] Run: containerd --version
	I0414 14:29:09.933729 1213155 ssh_runner.go:195] Run: containerd --version
	I0414 14:29:09.957147 1213155 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.23 ...
	I0414 14:29:09.958358 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:29:09.961074 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:09.961436 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:09.961465 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:09.961654 1213155 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 14:29:09.965618 1213155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
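	The /etc/hosts rewrite above follows a write-temp-then-copy pattern: the brace group emits the filtered hosts file plus the fresh host.minikube.internal entry into /tmp/h.$$, and only then does sudo cp replace /etc/hosts. That sidesteps the usual problem that shell redirection does not inherit sudo's privileges, and it never leaves /etc/hosts half-written.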
	I0414 14:29:09.977763 1213155 kubeadm.go:883] updating cluster {Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 14:29:09.977920 1213155 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:29:09.977985 1213155 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:29:10.007423 1213155 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0414 14:29:10.007567 1213155 ssh_runner.go:195] Run: which lz4
	I0414 14:29:10.011302 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0414 14:29:10.011399 1213155 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 14:29:10.015201 1213155 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 14:29:10.015237 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (398567491 bytes)
	I0414 14:29:11.177802 1213155 containerd.go:563] duration metric: took 1.166430977s to copy over tarball
	I0414 14:29:11.177883 1213155 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 14:29:13.222422 1213155 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.044497794s)
	I0414 14:29:13.222461 1213155 containerd.go:570] duration metric: took 2.04462504s to extract the tarball
	I0414 14:29:13.222471 1213155 ssh_runner.go:146] rm: /preloaded.tar.lz4
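	This is minikube's preload fast path: rather than pulling images one by one, a single ~398MB lz4 tarball of pre-extracted containerd content is copied in and unpacked under /var. The --xattrs --xattrs-include security.capability flags keep file capabilities intact inside the restored image layers.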
	I0414 14:29:13.258541 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:29:13.368119 1213155 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0414 14:29:13.394813 1213155 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:29:13.428402 1213155 retry.go:31] will retry after 248.442754ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-04-14T14:29:13Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0414 14:29:13.677983 1213155 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:29:13.709958 1213155 containerd.go:627] all images are preloaded for containerd runtime.
	I0414 14:29:13.709986 1213155 cache_images.go:84] Images are preloaded, skipping loading
	I0414 14:29:13.709997 1213155 kubeadm.go:934] updating node { 192.168.39.110 8443 v1.32.2 containerd true true} ...
	I0414 14:29:13.710119 1213155 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-290859 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.110
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
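	In the drop-in above, the bare "ExecStart=" line is intentional systemd syntax: an empty assignment in a drop-in clears the ExecStart inherited from the base kubelet.service, so the following line replaces the command instead of appending a second one.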
	I0414 14:29:13.710205 1213155 ssh_runner.go:195] Run: sudo crictl info
	I0414 14:29:13.747854 1213155 cni.go:84] Creating CNI manager for ""
	I0414 14:29:13.747881 1213155 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0414 14:29:13.747891 1213155 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 14:29:13.747912 1213155 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.110 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-290859 NodeName:ha-290859 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.110"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.110 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 14:29:13.748064 1213155 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.110
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-290859"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.110"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.110"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 14:29:13.748098 1213155 kube-vip.go:115] generating kube-vip config ...
	I0414 14:29:13.748144 1213155 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0414 14:29:13.764006 1213155 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0414 14:29:13.764157 1213155 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.10
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
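	The generated manifest stitches together values seen earlier in the log: address 192.168.39.254 is the APIServerHAVIP from the cluster config (the same VIP behind control-plane.minikube.internal:8443), and lb_enable/lb_port correspond to the "auto-enabling control-plane load-balancing" line at 14:29:13.764006, so kube-vip both floats the VIP across control-plane nodes and load-balances API traffic on port 8443. The hostPath mount points the pod at super-admin.conf for its kubeconfig on this first control-plane node.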
	I0414 14:29:13.764258 1213155 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 14:29:13.773742 1213155 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 14:29:13.773825 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0414 14:29:13.782879 1213155 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0414 14:29:13.798384 1213155 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 14:29:13.813614 1213155 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0414 14:29:13.828571 1213155 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1448 bytes)
	I0414 14:29:13.844489 1213155 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0414 14:29:13.848595 1213155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:29:13.861109 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:29:13.970530 1213155 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:29:13.987774 1213155 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859 for IP: 192.168.39.110
	I0414 14:29:13.987806 1213155 certs.go:194] generating shared ca certs ...
	I0414 14:29:13.987826 1213155 certs.go:226] acquiring lock for ca certs: {Name:mk7215406b4c41badf9eca6bf9f1036fd88f670e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:13.988007 1213155 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key
	I0414 14:29:13.988081 1213155 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key
	I0414 14:29:13.988097 1213155 certs.go:256] generating profile certs ...
	I0414 14:29:13.988180 1213155 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key
	I0414 14:29:13.988200 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt with IP's: []
	I0414 14:29:14.112386 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt ...
	I0414 14:29:14.112419 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt: {Name:mkaa12fb6551a5751b7fccd564d65a45c41d9fae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.112582 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key ...
	I0414 14:29:14.112593 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key: {Name:mk289f4dd0a4fd9031dc4ffc7198a0cf95bd5550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.112674 1213155 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037
	I0414 14:29:14.112690 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110 192.168.39.254]
	I0414 14:29:14.362652 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037 ...
	I0414 14:29:14.362686 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037: {Name:mkb37a2918627d85c90b385a1878c8973ae4ce15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.362861 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037 ...
	I0414 14:29:14.362875 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037: {Name:mk9be12aff468559ae8511cb5c354c2cb0f19d89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.362947 1213155 certs.go:381] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt
	I0414 14:29:14.363058 1213155 certs.go:385] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key
	I0414 14:29:14.363124 1213155 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key
	I0414 14:29:14.363139 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt with IP's: []
	I0414 14:29:14.734988 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt ...
	I0414 14:29:14.735020 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt: {Name:mkd4197f76084714cf4c93b86f69c9de5e486dfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.735175 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key ...
	I0414 14:29:14.735185 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key: {Name:mkafd73813de8b0bb698e460f51557bc241d5b76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
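	All of the profile certificates above are minted on the host and copied into the VM afterwards. As a rough illustration of what a "generating signed profile cert ... with IP's: [...]" step involves, here is a compact crypto/x509 sketch (a hypothetical helper, not minikube's actual crypto.go; the IP SANs are the ones listed for the apiserver cert at 14:29:14.112690):

		package certsketch

		import (
			"crypto/rand"
			"crypto/rsa"
			"crypto/x509"
			"crypto/x509/pkix"
			"math/big"
			"net"
			"time"
		)

		// signServingCert issues a CA-signed serving cert whose IP SANs cover
		// the service VIP, localhost, the node IP, and the HA VIP.
		func signServingCert(ca *x509.Certificate, caKey *rsa.PrivateKey) (der []byte, key *rsa.PrivateKey, err error) {
			key, err = rsa.GenerateKey(rand.Reader, 2048)
			if err != nil {
				return nil, nil, err
			}
			tmpl := &x509.Certificate{
				SerialNumber: big.NewInt(2),
				Subject:      pkix.Name{CommonName: "minikube"},
				NotBefore:    time.Now(),
				NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
				KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
				ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
				IPAddresses: []net.IP{
					net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
					net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.110"),
					net.ParseIP("192.168.39.254"),
				},
			}
			der, err = x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
			return der, key, err
		}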
	I0414 14:29:14.735249 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0414 14:29:14.735287 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0414 14:29:14.735300 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0414 14:29:14.735312 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0414 14:29:14.735324 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0414 14:29:14.735336 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0414 14:29:14.735348 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0414 14:29:14.735362 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0414 14:29:14.735413 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem (1338 bytes)
	W0414 14:29:14.735450 1213155 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639_empty.pem, impossibly tiny 0 bytes
	I0414 14:29:14.735459 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 14:29:14.735483 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem (1082 bytes)
	I0414 14:29:14.735504 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:29:14.735524 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem (1675 bytes)
	I0414 14:29:14.735559 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:29:14.735585 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:14.735598 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem -> /usr/share/ca-certificates/1203639.pem
	I0414 14:29:14.735609 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /usr/share/ca-certificates/12036392.pem
	I0414 14:29:14.736193 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:29:14.767094 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:29:14.800218 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:29:14.821856 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:29:14.844537 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0414 14:29:14.866333 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 14:29:14.888112 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:29:14.916382 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 14:29:14.938747 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:29:14.961044 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem --> /usr/share/ca-certificates/1203639.pem (1338 bytes)
	I0414 14:29:14.982817 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /usr/share/ca-certificates/12036392.pem (1708 bytes)
	I0414 14:29:15.004432 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 14:29:15.020381 1213155 ssh_runner.go:195] Run: openssl version
	I0414 14:29:15.026049 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:29:15.036472 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:15.040722 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:15.040772 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:15.046327 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 14:29:15.056866 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1203639.pem && ln -fs /usr/share/ca-certificates/1203639.pem /etc/ssl/certs/1203639.pem"
	I0414 14:29:15.067689 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1203639.pem
	I0414 14:29:15.071944 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1203639.pem
	I0414 14:29:15.071988 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1203639.pem
	I0414 14:29:15.077553 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1203639.pem /etc/ssl/certs/51391683.0"
	I0414 14:29:15.088088 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12036392.pem && ln -fs /usr/share/ca-certificates/12036392.pem /etc/ssl/certs/12036392.pem"
	I0414 14:29:15.098760 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12036392.pem
	I0414 14:29:15.103102 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/12036392.pem
	I0414 14:29:15.103157 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12036392.pem
	I0414 14:29:15.108670 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12036392.pem /etc/ssl/certs/3ec20f2e.0"
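	These three link steps reproduce what OpenSSL's c_rehash does: "openssl x509 -hash -noout" prints the subject-name hash (b5213941, 51391683 and 3ec20f2e here), and certificate lookup scans /etc/ssl/certs for <hash>.0 symlinks, so each CA bundle only becomes usable for verification once its hash link is in place.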
	I0414 14:29:15.119187 1213155 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:29:15.123052 1213155 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 14:29:15.123124 1213155 kubeadm.go:392] StartCluster: {Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:29:15.123226 1213155 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0414 14:29:15.123302 1213155 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 14:29:15.161985 1213155 cri.go:89] found id: ""
	I0414 14:29:15.162066 1213155 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 14:29:15.171810 1213155 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 14:29:15.180816 1213155 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 14:29:15.189781 1213155 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 14:29:15.189798 1213155 kubeadm.go:157] found existing configuration files:
	
	I0414 14:29:15.189837 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 14:29:15.198461 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 14:29:15.198520 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 14:29:15.207495 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 14:29:15.216131 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 14:29:15.216195 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 14:29:15.224923 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 14:29:15.233259 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 14:29:15.233331 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 14:29:15.241811 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 14:29:15.250678 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 14:29:15.250735 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 14:29:15.260028 1213155 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 14:29:15.480841 1213155 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 14:29:26.375395 1213155 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0414 14:29:26.375454 1213155 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 14:29:26.375539 1213155 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 14:29:26.375638 1213155 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 14:29:26.375756 1213155 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 14:29:26.375859 1213155 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 14:29:26.377483 1213155 out.go:235]   - Generating certificates and keys ...
	I0414 14:29:26.377576 1213155 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 14:29:26.377649 1213155 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 14:29:26.377746 1213155 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 14:29:26.377814 1213155 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 14:29:26.377894 1213155 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 14:29:26.377993 1213155 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 14:29:26.378062 1213155 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 14:29:26.378201 1213155 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-290859 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
	I0414 14:29:26.378273 1213155 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 14:29:26.378435 1213155 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-290859 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
	I0414 14:29:26.378525 1213155 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 14:29:26.378617 1213155 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 14:29:26.378679 1213155 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 14:29:26.378756 1213155 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 14:29:26.378826 1213155 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 14:29:26.378905 1213155 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0414 14:29:26.378987 1213155 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 14:29:26.379078 1213155 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 14:29:26.379147 1213155 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 14:29:26.379232 1213155 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 14:29:26.379336 1213155 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 14:29:26.381520 1213155 out.go:235]   - Booting up control plane ...
	I0414 14:29:26.381636 1213155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 14:29:26.381716 1213155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 14:29:26.381797 1213155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 14:29:26.381942 1213155 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 14:29:26.382066 1213155 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 14:29:26.382127 1213155 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 14:29:26.382279 1213155 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0414 14:29:26.382430 1213155 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0414 14:29:26.382522 1213155 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.073677ms
	I0414 14:29:26.382613 1213155 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0414 14:29:26.382699 1213155 kubeadm.go:310] [api-check] The API server is healthy after 6.046564753s
	I0414 14:29:26.382824 1213155 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0414 14:29:26.382965 1213155 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0414 14:29:26.383055 1213155 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0414 14:29:26.383232 1213155 kubeadm.go:310] [mark-control-plane] Marking the node ha-290859 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0414 14:29:26.383336 1213155 kubeadm.go:310] [bootstrap-token] Using token: vqb1fe.jxjhh2el8g0wstxf
	I0414 14:29:26.384515 1213155 out.go:235]   - Configuring RBAC rules ...
	I0414 14:29:26.384631 1213155 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0414 14:29:26.384713 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0414 14:29:26.384863 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0414 14:29:26.384975 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0414 14:29:26.385071 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0414 14:29:26.385151 1213155 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0414 14:29:26.385262 1213155 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0414 14:29:26.385326 1213155 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0414 14:29:26.385400 1213155 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0414 14:29:26.385416 1213155 kubeadm.go:310] 
	I0414 14:29:26.385469 1213155 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0414 14:29:26.385475 1213155 kubeadm.go:310] 
	I0414 14:29:26.385551 1213155 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0414 14:29:26.385557 1213155 kubeadm.go:310] 
	I0414 14:29:26.385578 1213155 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0414 14:29:26.385628 1213155 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0414 14:29:26.385686 1213155 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0414 14:29:26.385693 1213155 kubeadm.go:310] 
	I0414 14:29:26.385743 1213155 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0414 14:29:26.385752 1213155 kubeadm.go:310] 
	I0414 14:29:26.385800 1213155 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0414 14:29:26.385806 1213155 kubeadm.go:310] 
	I0414 14:29:26.385852 1213155 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0414 14:29:26.385921 1213155 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0414 14:29:26.385993 1213155 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0414 14:29:26.385999 1213155 kubeadm.go:310] 
	I0414 14:29:26.386068 1213155 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0414 14:29:26.386137 1213155 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0414 14:29:26.386143 1213155 kubeadm.go:310] 
	I0414 14:29:26.386219 1213155 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token vqb1fe.jxjhh2el8g0wstxf \
	I0414 14:29:26.386324 1213155 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c1bc537cee1b1ab5982921331b936a1839b1da6b0963279993bdeae11071854b \
	I0414 14:29:26.386357 1213155 kubeadm.go:310] 	--control-plane 
	I0414 14:29:26.386367 1213155 kubeadm.go:310] 
	I0414 14:29:26.386468 1213155 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0414 14:29:26.386481 1213155 kubeadm.go:310] 
	I0414 14:29:26.386583 1213155 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vqb1fe.jxjhh2el8g0wstxf \
	I0414 14:29:26.386727 1213155 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c1bc537cee1b1ab5982921331b936a1839b1da6b0963279993bdeae11071854b 
	I0414 14:29:26.386755 1213155 cni.go:84] Creating CNI manager for ""
	I0414 14:29:26.386764 1213155 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0414 14:29:26.388208 1213155 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0414 14:29:26.389242 1213155 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0414 14:29:26.394753 1213155 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0414 14:29:26.394774 1213155 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0414 14:29:26.412210 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0414 14:29:26.820060 1213155 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 14:29:26.820136 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:26.820188 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-290859 minikube.k8s.io/updated_at=2025_04_14T14_29_26_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=ed8f1f01b35eff2786f40199152a1775806f2de2 minikube.k8s.io/name=ha-290859 minikube.k8s.io/primary=true
	I0414 14:29:27.135153 1213155 ops.go:34] apiserver oom_adj: -16
	I0414 14:29:27.135367 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:27.635449 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:28.135449 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:28.636235 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:29.136309 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:29.636026 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:29.742992 1213155 kubeadm.go:1113] duration metric: took 2.922923817s to wait for elevateKubeSystemPrivileges
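The repeated `kubectl get sa default` calls above are elevateKubeSystemPrivileges polling on a roughly 500ms cadence until the default service account exists, since the RBAC binding created just before it cannot take effect until then. A sketch of that wait loop (stand-alone, shelling out to kubectl rather than using minikube's ssh_runner):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForDefaultSA polls until the default service account exists,
    // mirroring the ~500ms cadence visible in the log above.
    func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		err := exec.Command("kubectl", "--kubeconfig="+kubeconfig,
    			"get", "sa", "default").Run()
    		if err == nil {
    			return nil // service account is there; RBAC can bind to it
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("default service account not ready after %s", timeout)
    }

    func main() {
    	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute); err != nil {
    		panic(err)
    	}
    }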
	I0414 14:29:29.743045 1213155 kubeadm.go:394] duration metric: took 14.619926947s to StartCluster
	I0414 14:29:29.743074 1213155 settings.go:142] acquiring lock: {Name:mk41907a6d0da0bb56b7cd58b5d8065ec36ecc97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:29.743194 1213155 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:29:29.744197 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/kubeconfig: {Name:mkeb969af3beabfdafe344f27031959a97621135 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:29.744491 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0414 14:29:29.744502 1213155 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0414 14:29:29.744531 1213155 start.go:241] waiting for startup goroutines ...
	I0414 14:29:29.744555 1213155 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 14:29:29.744638 1213155 addons.go:69] Setting storage-provisioner=true in profile "ha-290859"
	I0414 14:29:29.744667 1213155 addons.go:238] Setting addon storage-provisioner=true in "ha-290859"
	I0414 14:29:29.744674 1213155 addons.go:69] Setting default-storageclass=true in profile "ha-290859"
	I0414 14:29:29.744699 1213155 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:29:29.744707 1213155 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-290859"
	I0414 14:29:29.744811 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:29.745181 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.745244 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.745183 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.745351 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.761398 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40887
	I0414 14:29:29.761447 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39907
	I0414 14:29:29.761914 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.762048 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.762457 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.762483 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.762590 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.762615 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.762878 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.762995 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.763052 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:29.763589 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.763641 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.765711 1213155 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:29:29.765898 1213155 kapi.go:59] client config for ha-290859: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt", KeyFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key", CAFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0414 14:29:29.766513 1213155 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0414 14:29:29.766536 1213155 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0414 14:29:29.766543 1213155 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0414 14:29:29.766547 1213155 cert_rotation.go:140] Starting client certificate rotation controller
	I0414 14:29:29.766549 1213155 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0414 14:29:29.766958 1213155 addons.go:238] Setting addon default-storageclass=true in "ha-290859"
	I0414 14:29:29.767009 1213155 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:29:29.767411 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.767464 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.779638 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46315
	I0414 14:29:29.780179 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.780847 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.780887 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.781279 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.781512 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:29.783372 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:29.783403 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36833
	I0414 14:29:29.783908 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.784349 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.784370 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.784677 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.785084 1213155 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 14:29:29.785313 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.785366 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.786178 1213155 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 14:29:29.786200 1213155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 14:29:29.786221 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:29.789923 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:29.790430 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:29.790464 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:29.790637 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:29.790795 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:29.790922 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:29.791099 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:29.802732 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37933
	I0414 14:29:29.803356 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.803862 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.803890 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.804276 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.804490 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:29.806170 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:29.806431 1213155 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 14:29:29.806453 1213155 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 14:29:29.806472 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:29.808998 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:29.809401 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:29.809433 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:29.809569 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:29.809729 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:29.809892 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:29.810022 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:29.896163 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0414 14:29:29.925192 1213155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 14:29:29.976032 1213155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 14:29:30.538988 1213155 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
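The long sed pipeline above edits the coredns ConfigMap in place: it inserts a hosts plugin block ahead of the forward directive (and a log directive ahead of errors), which is what "host record injected" refers to. Reconstructed from the sed expression, the stanza spliced into the Corefile is:

    hosts {
       192.168.39.1 host.minikube.internal
       fallthrough
    }

fallthrough ensures that every name other than host.minikube.internal still reaches the forward plugin.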
	I0414 14:29:30.715801 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.715837 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.715837 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.715853 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.716172 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.716195 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.716206 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.716213 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.716280 1213155 main.go:141] libmachine: (ha-290859) DBG | Closing plugin on server side
	I0414 14:29:30.716311 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.716327 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.716336 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.716346 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.716567 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.716583 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.716597 1213155 main.go:141] libmachine: (ha-290859) DBG | Closing plugin on server side
	I0414 14:29:30.716566 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.716613 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.716759 1213155 round_trippers.go:470] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0414 14:29:30.716773 1213155 round_trippers.go:476] Request Headers:
	I0414 14:29:30.716785 1213155 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:29:30.716791 1213155 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:29:30.730413 1213155 round_trippers.go:581] Response Status: 200 OK in 13 milliseconds
	I0414 14:29:30.730637 1213155 round_trippers.go:470] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0414 14:29:30.730648 1213155 round_trippers.go:476] Request Headers:
	I0414 14:29:30.730655 1213155 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:29:30.730659 1213155 round_trippers.go:480]     Content-Type: application/vnd.kubernetes.protobuf
	I0414 14:29:30.730662 1213155 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:29:30.734349 1213155 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0414 14:29:30.734498 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.734513 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.734892 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.734913 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.734944 1213155 main.go:141] libmachine: (ha-290859) DBG | Closing plugin on server side
	I0414 14:29:30.736606 1213155 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0414 14:29:30.738276 1213155 addons.go:514] duration metric: took 993.723048ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0414 14:29:30.738323 1213155 start.go:246] waiting for cluster config update ...
	I0414 14:29:30.738339 1213155 start.go:255] writing updated cluster config ...
	I0414 14:29:30.739993 1213155 out.go:201] 
	I0414 14:29:30.741235 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:30.741303 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:29:30.742718 1213155 out.go:177] * Starting "ha-290859-m02" control-plane node in "ha-290859" cluster
	I0414 14:29:30.743745 1213155 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:29:30.743770 1213155 cache.go:56] Caching tarball of preloaded images
	I0414 14:29:30.743876 1213155 preload.go:172] Found /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0414 14:29:30.743890 1213155 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0414 14:29:30.743970 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:29:30.744172 1213155 start.go:360] acquireMachinesLock for ha-290859-m02: {Name:mk496006d22a0565bb9e0d565e1b3cb0cf0971cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 14:29:30.744229 1213155 start.go:364] duration metric: took 28.185µs to acquireMachinesLock for "ha-290859-m02"
	I0414 14:29:30.744249 1213155 start.go:93] Provisioning new machine with config: &{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0414 14:29:30.744334 1213155 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0414 14:29:30.745838 1213155 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0414 14:29:30.745923 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:30.745962 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:30.761449 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46555
	I0414 14:29:30.761938 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:30.762474 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:30.762500 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:30.762925 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:30.763197 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:29:30.763401 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:30.763637 1213155 start.go:159] libmachine.API.Create for "ha-290859" (driver="kvm2")
	I0414 14:29:30.763675 1213155 client.go:168] LocalClient.Create starting
	I0414 14:29:30.763717 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem
	I0414 14:29:30.763761 1213155 main.go:141] libmachine: Decoding PEM data...
	I0414 14:29:30.763783 1213155 main.go:141] libmachine: Parsing certificate...
	I0414 14:29:30.763861 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem
	I0414 14:29:30.763890 1213155 main.go:141] libmachine: Decoding PEM data...
	I0414 14:29:30.763907 1213155 main.go:141] libmachine: Parsing certificate...
	I0414 14:29:30.763954 1213155 main.go:141] libmachine: Running pre-create checks...
	I0414 14:29:30.763968 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .PreCreateCheck
	I0414 14:29:30.764183 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
	I0414 14:29:30.764607 1213155 main.go:141] libmachine: Creating machine...
	I0414 14:29:30.764633 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .Create
	I0414 14:29:30.764796 1213155 main.go:141] libmachine: (ha-290859-m02) creating KVM machine...
	I0414 14:29:30.764820 1213155 main.go:141] libmachine: (ha-290859-m02) creating network...
	I0414 14:29:30.765949 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found existing default KVM network
	I0414 14:29:30.766029 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found existing private KVM network mk-ha-290859
	I0414 14:29:30.766196 1213155 main.go:141] libmachine: (ha-290859-m02) setting up store path in /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02 ...
	I0414 14:29:30.766222 1213155 main.go:141] libmachine: (ha-290859-m02) building disk image from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 14:29:30.766301 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:30.766189 1213531 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:29:30.766373 1213155 main.go:141] libmachine: (ha-290859-m02) Downloading /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 14:29:31.062543 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:31.062391 1213531 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa...
	I0414 14:29:31.719024 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:31.718890 1213531 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/ha-290859-m02.rawdisk...
	I0414 14:29:31.719061 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Writing magic tar header
	I0414 14:29:31.719076 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Writing SSH key tar header
	I0414 14:29:31.719086 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:31.719015 1213531 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02 ...
	I0414 14:29:31.719187 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02
	I0414 14:29:31.719213 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02 (perms=drwx------)
	I0414 14:29:31.719221 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines
	I0414 14:29:31.719232 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:29:31.719239 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines (perms=drwxr-xr-x)
	I0414 14:29:31.719270 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368
	I0414 14:29:31.719288 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube (perms=drwxr-xr-x)
	I0414 14:29:31.719298 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 14:29:31.719315 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins
	I0414 14:29:31.719326 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home
	I0414 14:29:31.719336 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | skipping /home - not owner
	I0414 14:29:31.719349 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368 (perms=drwxrwxr-x)
	I0414 14:29:31.719368 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 14:29:31.719380 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 14:29:31.719386 1213155 main.go:141] libmachine: (ha-290859-m02) creating domain...
	I0414 14:29:31.720303 1213155 main.go:141] libmachine: (ha-290859-m02) define libvirt domain using xml: 
	I0414 14:29:31.720321 1213155 main.go:141] libmachine: (ha-290859-m02) <domain type='kvm'>
	I0414 14:29:31.720330 1213155 main.go:141] libmachine: (ha-290859-m02)   <name>ha-290859-m02</name>
	I0414 14:29:31.720338 1213155 main.go:141] libmachine: (ha-290859-m02)   <memory unit='MiB'>2200</memory>
	I0414 14:29:31.720346 1213155 main.go:141] libmachine: (ha-290859-m02)   <vcpu>2</vcpu>
	I0414 14:29:31.720352 1213155 main.go:141] libmachine: (ha-290859-m02)   <features>
	I0414 14:29:31.720359 1213155 main.go:141] libmachine: (ha-290859-m02)     <acpi/>
	I0414 14:29:31.720364 1213155 main.go:141] libmachine: (ha-290859-m02)     <apic/>
	I0414 14:29:31.720371 1213155 main.go:141] libmachine: (ha-290859-m02)     <pae/>
	I0414 14:29:31.720381 1213155 main.go:141] libmachine: (ha-290859-m02)     
	I0414 14:29:31.720411 1213155 main.go:141] libmachine: (ha-290859-m02)   </features>
	I0414 14:29:31.720433 1213155 main.go:141] libmachine: (ha-290859-m02)   <cpu mode='host-passthrough'>
	I0414 14:29:31.720452 1213155 main.go:141] libmachine: (ha-290859-m02)   
	I0414 14:29:31.720461 1213155 main.go:141] libmachine: (ha-290859-m02)   </cpu>
	I0414 14:29:31.720488 1213155 main.go:141] libmachine: (ha-290859-m02)   <os>
	I0414 14:29:31.720507 1213155 main.go:141] libmachine: (ha-290859-m02)     <type>hvm</type>
	I0414 14:29:31.720537 1213155 main.go:141] libmachine: (ha-290859-m02)     <boot dev='cdrom'/>
	I0414 14:29:31.720559 1213155 main.go:141] libmachine: (ha-290859-m02)     <boot dev='hd'/>
	I0414 14:29:31.720572 1213155 main.go:141] libmachine: (ha-290859-m02)     <bootmenu enable='no'/>
	I0414 14:29:31.720587 1213155 main.go:141] libmachine: (ha-290859-m02)   </os>
	I0414 14:29:31.720597 1213155 main.go:141] libmachine: (ha-290859-m02)   <devices>
	I0414 14:29:31.720609 1213155 main.go:141] libmachine: (ha-290859-m02)     <disk type='file' device='cdrom'>
	I0414 14:29:31.720626 1213155 main.go:141] libmachine: (ha-290859-m02)       <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/boot2docker.iso'/>
	I0414 14:29:31.720637 1213155 main.go:141] libmachine: (ha-290859-m02)       <target dev='hdc' bus='scsi'/>
	I0414 14:29:31.720649 1213155 main.go:141] libmachine: (ha-290859-m02)       <readonly/>
	I0414 14:29:31.720659 1213155 main.go:141] libmachine: (ha-290859-m02)     </disk>
	I0414 14:29:31.720668 1213155 main.go:141] libmachine: (ha-290859-m02)     <disk type='file' device='disk'>
	I0414 14:29:31.720684 1213155 main.go:141] libmachine: (ha-290859-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 14:29:31.720699 1213155 main.go:141] libmachine: (ha-290859-m02)       <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/ha-290859-m02.rawdisk'/>
	I0414 14:29:31.720732 1213155 main.go:141] libmachine: (ha-290859-m02)       <target dev='hda' bus='virtio'/>
	I0414 14:29:31.720746 1213155 main.go:141] libmachine: (ha-290859-m02)     </disk>
	I0414 14:29:31.720756 1213155 main.go:141] libmachine: (ha-290859-m02)     <interface type='network'>
	I0414 14:29:31.720768 1213155 main.go:141] libmachine: (ha-290859-m02)       <source network='mk-ha-290859'/>
	I0414 14:29:31.720777 1213155 main.go:141] libmachine: (ha-290859-m02)       <model type='virtio'/>
	I0414 14:29:31.720788 1213155 main.go:141] libmachine: (ha-290859-m02)     </interface>
	I0414 14:29:31.720799 1213155 main.go:141] libmachine: (ha-290859-m02)     <interface type='network'>
	I0414 14:29:31.720809 1213155 main.go:141] libmachine: (ha-290859-m02)       <source network='default'/>
	I0414 14:29:31.720821 1213155 main.go:141] libmachine: (ha-290859-m02)       <model type='virtio'/>
	I0414 14:29:31.720835 1213155 main.go:141] libmachine: (ha-290859-m02)     </interface>
	I0414 14:29:31.720844 1213155 main.go:141] libmachine: (ha-290859-m02)     <serial type='pty'>
	I0414 14:29:31.720855 1213155 main.go:141] libmachine: (ha-290859-m02)       <target port='0'/>
	I0414 14:29:31.720865 1213155 main.go:141] libmachine: (ha-290859-m02)     </serial>
	I0414 14:29:31.720875 1213155 main.go:141] libmachine: (ha-290859-m02)     <console type='pty'>
	I0414 14:29:31.720886 1213155 main.go:141] libmachine: (ha-290859-m02)       <target type='serial' port='0'/>
	I0414 14:29:31.720896 1213155 main.go:141] libmachine: (ha-290859-m02)     </console>
	I0414 14:29:31.720909 1213155 main.go:141] libmachine: (ha-290859-m02)     <rng model='virtio'>
	I0414 14:29:31.720943 1213155 main.go:141] libmachine: (ha-290859-m02)       <backend model='random'>/dev/random</backend>
	I0414 14:29:31.720956 1213155 main.go:141] libmachine: (ha-290859-m02)     </rng>
	I0414 14:29:31.720962 1213155 main.go:141] libmachine: (ha-290859-m02)     
	I0414 14:29:31.720972 1213155 main.go:141] libmachine: (ha-290859-m02)     
	I0414 14:29:31.720978 1213155 main.go:141] libmachine: (ha-290859-m02)   </devices>
	I0414 14:29:31.720993 1213155 main.go:141] libmachine: (ha-290859-m02) </domain>
	I0414 14:29:31.721002 1213155 main.go:141] libmachine: (ha-290859-m02) 
	I0414 14:29:31.727524 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:76:01:7d in network default
	I0414 14:29:31.728172 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:31.728187 1213155 main.go:141] libmachine: (ha-290859-m02) starting domain...
	I0414 14:29:31.728195 1213155 main.go:141] libmachine: (ha-290859-m02) ensuring networks are active...
	I0414 14:29:31.728896 1213155 main.go:141] libmachine: (ha-290859-m02) Ensuring network default is active
	I0414 14:29:31.729170 1213155 main.go:141] libmachine: (ha-290859-m02) Ensuring network mk-ha-290859 is active
	I0414 14:29:31.729521 1213155 main.go:141] libmachine: (ha-290859-m02) getting domain XML...
	I0414 14:29:31.730489 1213155 main.go:141] libmachine: (ha-290859-m02) creating domain...
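The XML dumped above is the complete libvirt domain definition for the m02 VM (CD-ROM boot of the ISO, raw disk, two virtio NICs on the mk-ha-290859 and default networks, serial console, virtio RNG). A minimal sketch of the define-and-start sequence, assuming the libvirt.org/go/libvirt bindings and a file holding that XML; minikube's actual kvm2 driver wraps the equivalent calls behind the plugin RPC seen throughout this log:

    package main

    import (
    	"os"

    	"libvirt.org/go/libvirt"
    )

    func main() {
    	conn, err := libvirt.NewConnect("qemu:///system") // URI from KVMQemuURI above
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	xml, err := os.ReadFile("ha-290859-m02.xml") // hypothetical file with the XML above
    	if err != nil {
    		panic(err)
    	}
    	dom, err := conn.DomainDefineXML(string(xml)) // "define libvirt domain using xml"
    	if err != nil {
    		panic(err)
    	}
    	defer dom.Free()
    	if err := dom.Create(); err != nil { // "creating domain..." actually starts it
    		panic(err)
    	}
    }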
	I0414 14:29:32.993969 1213155 main.go:141] libmachine: (ha-290859-m02) waiting for IP...
	I0414 14:29:32.996009 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:32.996441 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:32.996505 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:32.996448 1213531 retry.go:31] will retry after 202.522594ms: waiting for domain to come up
	I0414 14:29:33.201175 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:33.201705 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:33.201751 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:33.201682 1213531 retry.go:31] will retry after 346.96007ms: waiting for domain to come up
	I0414 14:29:33.550485 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:33.550900 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:33.550931 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:33.550863 1213531 retry.go:31] will retry after 407.207189ms: waiting for domain to come up
	I0414 14:29:33.959550 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:33.960116 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:33.960149 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:33.960094 1213531 retry.go:31] will retry after 434.401549ms: waiting for domain to come up
	I0414 14:29:34.395749 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:34.396217 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:34.396267 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:34.396208 1213531 retry.go:31] will retry after 552.547121ms: waiting for domain to come up
	I0414 14:29:34.949860 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:34.950310 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:34.950344 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:34.950269 1213531 retry.go:31] will retry after 848.939274ms: waiting for domain to come up
	I0414 14:29:35.800706 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:35.801275 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:35.801301 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:35.801229 1213531 retry.go:31] will retry after 1.078619357s: waiting for domain to come up
	I0414 14:29:36.881700 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:36.882163 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:36.882187 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:36.882128 1213531 retry.go:31] will retry after 1.079210669s: waiting for domain to come up
	I0414 14:29:37.963455 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:37.963935 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:37.963969 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:37.963899 1213531 retry.go:31] will retry after 1.194058186s: waiting for domain to come up
	I0414 14:29:39.160481 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:39.160993 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:39.161031 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:39.160949 1213531 retry.go:31] will retry after 1.513626688s: waiting for domain to come up
	I0414 14:29:40.676551 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:40.677038 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:40.677071 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:40.677004 1213531 retry.go:31] will retry after 1.924347004s: waiting for domain to come up
	I0414 14:29:42.603644 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:42.604168 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:42.604192 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:42.604145 1213531 retry.go:31] will retry after 2.797639018s: waiting for domain to come up
	I0414 14:29:45.405004 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:45.405658 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:45.405688 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:45.405627 1213531 retry.go:31] will retry after 2.864814671s: waiting for domain to come up
	I0414 14:29:48.274060 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:48.274518 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:48.274591 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:48.274508 1213531 retry.go:31] will retry after 4.611052523s: waiting for domain to come up
	I0414 14:29:52.886693 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:52.887068 1213155 main.go:141] libmachine: (ha-290859-m02) found domain IP: 192.168.39.111
	I0414 14:29:52.887093 1213155 main.go:141] libmachine: (ha-290859-m02) reserving static IP address...
	I0414 14:29:52.887105 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has current primary IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:52.887506 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find host DHCP lease matching {name: "ha-290859-m02", mac: "52:54:00:f0:fd:94", ip: "192.168.39.111"} in network mk-ha-290859
	I0414 14:29:52.966052 1213155 main.go:141] libmachine: (ha-290859-m02) reserved static IP address 192.168.39.111 for domain ha-290859-m02
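The twenty seconds of "will retry after ..." lines above are retry.go polling the DHCP leases with jittered, growing intervals (202ms, 346ms, 407ms, ... 4.6s) until the guest obtains 192.168.39.111. A sketch of that pattern; waitForIP, lookupIP, and the exact growth factor are illustrative stand-ins, not minikube's code:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitForIP polls lookupIP (a stand-in for the driver's DHCP-lease query)
    // with jittered, growing backoff until it yields an address.
    func waitForIP(lookupIP func() (string, error), deadline time.Duration) (string, error) {
    	start := time.Now()
    	wait := 200 * time.Millisecond
    	for time.Since(start) < deadline {
    		if ip, err := lookupIP(); err == nil {
    			return ip, nil
    		}
    		sleep := wait + time.Duration(rand.Int63n(int64(wait))) // add jitter
    		fmt.Printf("will retry after %s: waiting for domain to come up\n", sleep)
    		time.Sleep(sleep)
    		wait = wait * 3 / 2 // grow the base interval
    	}
    	return "", errors.New("timed out waiting for domain IP")
    }

    func main() {
    	ip, err := waitForIP(func() (string, error) {
    		return "", errors.New("no lease yet") // hypothetical lookup that never succeeds
    	}, 3*time.Second)
    	fmt.Println(ip, err)
    }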
	I0414 14:29:52.966083 1213155 main.go:141] libmachine: (ha-290859-m02) waiting for SSH...
	I0414 14:29:52.966091 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Getting to WaitForSSH function...
	I0414 14:29:52.968665 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:52.969034 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:52.969082 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:52.969208 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Using SSH client type: external
	I0414 14:29:52.969231 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa (-rw-------)
	I0414 14:29:52.969263 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:29:52.969282 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | About to run SSH command:
	I0414 14:29:52.969295 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | exit 0
	I0414 14:29:53.095336 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | SSH cmd err, output: <nil>: 
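WaitForSSH above shells out to the system ssh binary with the flags shown and retries `exit 0` until the guest answers. An equivalent sketch using golang.org/x/crypto/ssh instead of the external client, with the key path, user, and address taken from the log lines above:

    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no
    		Timeout:         10 * time.Second,
    	}
    	for { // keep dialing until sshd inside the guest answers
    		client, err := ssh.Dial("tcp", "192.168.39.111:22", cfg)
    		if err != nil {
    			time.Sleep(time.Second)
    			continue
    		}
    		sess, serr := client.NewSession()
    		if serr == nil {
    			runErr := sess.Run("exit 0") // same probe command as WaitForSSH
    			sess.Close()
    			if runErr == nil {
    				client.Close()
    				fmt.Println("ssh is up")
    				return
    			}
    		}
    		client.Close()
    		time.Sleep(time.Second)
    	}
    }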
	I0414 14:29:53.095545 1213155 main.go:141] libmachine: (ha-290859-m02) KVM machine creation complete
	I0414 14:29:53.095910 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
	I0414 14:29:53.096462 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:53.096622 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:53.096806 1213155 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 14:29:53.096820 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetState
	I0414 14:29:53.098070 1213155 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 14:29:53.098085 1213155 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 14:29:53.098090 1213155 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 14:29:53.098095 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.100244 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.100649 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.100680 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.100852 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.101066 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.101236 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.101372 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.101519 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:53.101769 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:53.101782 1213155 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 14:29:53.206593 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:29:53.206617 1213155 main.go:141] libmachine: Detecting the provisioner...
	I0414 14:29:53.206628 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.209588 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.209969 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.209988 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.210187 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.210382 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.210544 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.210717 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.210971 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:53.211192 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:53.211205 1213155 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 14:29:53.315888 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 14:29:53.315980 1213155 main.go:141] libmachine: found compatible host: buildroot
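Provisioner detection is just `cat /etc/os-release` over SSH followed by a match on the ID field, which is how Buildroot is recognized here. A local sketch of that parse, assuming the standard os-release key=value format:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	f, err := os.Open("/etc/os-release")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		// ID=buildroot in the output above selects the buildroot provisioner.
    		if v, ok := strings.CutPrefix(sc.Text(), "ID="); ok {
    			fmt.Println("detected provisioner family:", strings.Trim(v, `"`))
    		}
    	}
    }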
	I0414 14:29:53.315990 1213155 main.go:141] libmachine: Provisioning with buildroot...
	I0414 14:29:53.316001 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:29:53.316277 1213155 buildroot.go:166] provisioning hostname "ha-290859-m02"
	I0414 14:29:53.316306 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:29:53.316451 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.319393 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.319803 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.319837 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.319946 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.320140 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.320321 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.320450 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.320602 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:53.320806 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:53.320818 1213155 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-290859-m02 && echo "ha-290859-m02" | sudo tee /etc/hostname
	I0414 14:29:53.442594 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-290859-m02
	
	I0414 14:29:53.442629 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.445561 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.445918 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.445944 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.446150 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.446351 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.446528 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.446678 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.446833 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:53.447038 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:53.447053 1213155 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-290859-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-290859-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-290859-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:29:53.559946 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:29:53.559988 1213155 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1196368/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1196368/.minikube}
	I0414 14:29:53.560014 1213155 buildroot.go:174] setting up certificates
	I0414 14:29:53.560031 1213155 provision.go:84] configureAuth start
	I0414 14:29:53.560046 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:29:53.560377 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:29:53.562822 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.563207 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.563237 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.563574 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.566107 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.566478 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.566505 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.566628 1213155 provision.go:143] copyHostCerts
	I0414 14:29:53.566676 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:29:53.566716 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem, removing ...
	I0414 14:29:53.566730 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:29:53.566839 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem (1082 bytes)
	I0414 14:29:53.566954 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:29:53.566979 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem, removing ...
	I0414 14:29:53.566987 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:29:53.567026 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem (1123 bytes)
	I0414 14:29:53.567106 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:29:53.567130 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem, removing ...
	I0414 14:29:53.567137 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:29:53.567173 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem (1675 bytes)
	I0414 14:29:53.567293 1213155 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem org=jenkins.ha-290859-m02 san=[127.0.0.1 192.168.39.111 ha-290859-m02 localhost minikube]
	I0414 14:29:53.976110 1213155 provision.go:177] copyRemoteCerts
	I0414 14:29:53.976184 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:29:53.976219 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.978798 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.979170 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.979202 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.979355 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.979571 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.979771 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.979950 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:29:54.060926 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0414 14:29:54.061020 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0414 14:29:54.083723 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0414 14:29:54.083818 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 14:29:54.106702 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0414 14:29:54.106773 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 14:29:54.128136 1213155 provision.go:87] duration metric: took 568.088664ms to configureAuth
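The provision.go:117 step above issues a server certificate signed by the minikube CA, with SANs covering the node IP (192.168.39.111), hostname, localhost, and loopback. A minimal sketch of that issuance with Go's crypto/x509, assuming an RSA PKCS#1 CA key pair on disk; file names and the validity period are illustrative, not minikube's actual values:

// sancert.go: sketch of issuing a server cert with SANs, signed by an
// existing CA (ca.pem / ca-key.pem). Assumes an RSA PKCS#1 CA key.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caPEM, err := os.ReadFile("ca.pem")
	check(err)
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	check(err)
	caBlock, _ := pem.Decode(caPEM)    // nil checks elided for brevity
	keyBlock, _ := pem.Decode(caKeyPEM)
	ca, err := x509.ParseCertificate(caBlock.Bytes)
	check(err)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	check(err)

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-290859-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SANs from the provision.go:117 log line above.
		DNSNames:    []string{"ha-290859-m02", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.111")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	check(err)
	check(os.WriteFile("server.pem",
		pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644))
	check(os.WriteFile("server-key.pem",
		pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0o600))
}

func check(err error) {
	if err != nil {
		panic(err)
	}
}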
	I0414 14:29:54.128177 1213155 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:29:54.128372 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:54.128400 1213155 main.go:141] libmachine: Checking connection to Docker...
	I0414 14:29:54.128413 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetURL
	I0414 14:29:54.129571 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | using libvirt version 6000000
	I0414 14:29:54.131690 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.132071 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.132095 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.132296 1213155 main.go:141] libmachine: Docker is up and running!
	I0414 14:29:54.132311 1213155 main.go:141] libmachine: Reticulating splines...
	I0414 14:29:54.132318 1213155 client.go:171] duration metric: took 23.368636066s to LocalClient.Create
	I0414 14:29:54.132344 1213155 start.go:167] duration metric: took 23.368708618s to libmachine.API.Create "ha-290859"
	I0414 14:29:54.132356 1213155 start.go:293] postStartSetup for "ha-290859-m02" (driver="kvm2")
	I0414 14:29:54.132370 1213155 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:29:54.132394 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.132652 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:29:54.132681 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:54.134726 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.135119 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.135146 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.135312 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:54.135512 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.135648 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:54.135782 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:29:54.217134 1213155 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:29:54.221237 1213155 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:29:54.221265 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/addons for local assets ...
	I0414 14:29:54.221324 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/files for local assets ...
	I0414 14:29:54.221392 1213155 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> 12036392.pem in /etc/ssl/certs
	I0414 14:29:54.221401 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /etc/ssl/certs/12036392.pem
	I0414 14:29:54.221495 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:29:54.230111 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:29:54.253934 1213155 start.go:296] duration metric: took 121.560617ms for postStartSetup
	I0414 14:29:54.253995 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
	I0414 14:29:54.254683 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:29:54.257374 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.257778 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.257811 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.258118 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:29:54.258332 1213155 start.go:128] duration metric: took 23.513984018s to createHost
	I0414 14:29:54.258362 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:54.260873 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.261257 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.261285 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.261448 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:54.261638 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.261821 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.261984 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:54.262185 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:54.262369 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:54.262379 1213155 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:29:54.367727 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744640994.343893226
	
	I0414 14:29:54.367759 1213155 fix.go:216] guest clock: 1744640994.343893226
	I0414 14:29:54.367766 1213155 fix.go:229] Guest: 2025-04-14 14:29:54.343893226 +0000 UTC Remote: 2025-04-14 14:29:54.258346943 +0000 UTC m=+69.442509295 (delta=85.546283ms)
	I0414 14:29:54.367782 1213155 fix.go:200] guest clock delta is within tolerance: 85.546283ms
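The fix.go lines above compare the guest's clock (read over SSH via `date +%s.%N`) against the host's and only resync when the skew exceeds a tolerance; here the 85.5ms delta passes. A small sketch of that check, where the 1s threshold is an assumption, not minikube's actual constant:

// clockcheck.go: sketch of the guest-clock tolerance check seen above.
package main

import (
	"fmt"
	"time"
)

func main() {
	remote := time.Now()                            // host-side timestamp
	guest := remote.Add(85546283 * time.Nanosecond) // pretend guest reading (delta from the log)

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 1 * time.Second // assumed threshold
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock skewed by %v; would resync\n", delta)
	}
}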
	I0414 14:29:54.367788 1213155 start.go:83] releasing machines lock for "ha-290859-m02", held for 23.623550564s
	I0414 14:29:54.367807 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.368115 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:29:54.370975 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.371432 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.371462 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.373758 1213155 out.go:177] * Found network options:
	I0414 14:29:54.375127 1213155 out.go:177]   - NO_PROXY=192.168.39.110
	W0414 14:29:54.376278 1213155 proxy.go:119] fail to check proxy env: Error ip not in block
	I0414 14:29:54.376312 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.376913 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.377127 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.377268 1213155 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:29:54.377316 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	W0414 14:29:54.377370 1213155 proxy.go:119] fail to check proxy env: Error ip not in block
	I0414 14:29:54.377457 1213155 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0414 14:29:54.377481 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:54.380102 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.380374 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.380406 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.380429 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.380578 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:54.380741 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.380859 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.380897 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.380909 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:54.381045 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:29:54.381125 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:54.381305 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.381467 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:54.381614 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	W0414 14:29:54.458225 1213155 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:29:54.458308 1213155 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:29:54.490449 1213155 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
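The find/mv pipeline above sidelines any preinstalled bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so they no longer shadow minikube's own CNI setup. A sketch of the equivalent renames in Go, using the same paths as the log:

// cnidisable.go: sketch of the bridge-CNI disabling step above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, err := filepath.Glob(pat)
		if err != nil {
			panic(err)
		}
		for _, f := range matches {
			info, err := os.Stat(f)
			if err != nil || info.IsDir() {
				continue // mirror find's -type f
			}
			if strings.HasSuffix(f, ".mk_disabled") {
				continue // already disabled
			}
			if err := os.Rename(f, f+".mk_disabled"); err != nil {
				panic(err)
			}
			fmt.Printf("disabled %s\n", f)
		}
	}
}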
	I0414 14:29:54.490475 1213155 start.go:495] detecting cgroup driver to use...
	I0414 14:29:54.490555 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0414 14:29:54.524660 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0414 14:29:54.537871 1213155 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:29:54.537936 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:29:54.549801 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:29:54.562203 1213155 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:29:54.666348 1213155 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:29:54.786710 1213155 docker.go:233] disabling docker service ...
	I0414 14:29:54.786789 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:29:54.800092 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:29:54.812105 1213155 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:29:54.936777 1213155 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:29:55.059002 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 14:29:55.072980 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:29:55.089970 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0414 14:29:55.099362 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0414 14:29:55.108681 1213155 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0414 14:29:55.108766 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0414 14:29:55.118203 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:29:55.127402 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0414 14:29:55.136483 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:29:55.145554 1213155 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:29:55.154769 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0414 14:29:55.163700 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0414 14:29:55.172612 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
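The sed edits above rewrite /etc/containerd/config.toml in place: pin the sandbox image to pause:3.10, switch runtimes to io.containerd.runc.v2, point conf_dir at /etc/cni/net.d, and set SystemdCgroup = false so containerd uses the cgroupfs driver. A sketch of just the SystemdCgroup substitution as a Go regexp, mirroring the sed expression from the log:

// cgroupfs.go: sketch of the SystemdCgroup rewrite performed via sed above.
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Same substitution as: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}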
	I0414 14:29:55.181597 1213155 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:29:55.189962 1213155 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:29:55.190019 1213155 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:29:55.202112 1213155 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 14:29:55.210883 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:29:55.319480 1213155 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0414 14:29:55.344914 1213155 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0414 14:29:55.345008 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:29:55.349081 1213155 retry.go:31] will retry after 1.00520308s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0414 14:29:56.354657 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
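After restarting containerd, the start logic polls for /run/containerd/containerd.sock under a 60s cap; above, the first stat fails and the retry about a second later succeeds. A sketch of that bounded wait loop, with a fixed 1s sleep standing in for the randomized backoff retry.go actually uses:

// sockwait.go: sketch of the wait-for-socket loop above.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(time.Second) // assumed interval; the real code backs off
	}
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("containerd socket is ready")
}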
	I0414 14:29:56.359600 1213155 start.go:563] Will wait 60s for crictl version
	I0414 14:29:56.359685 1213155 ssh_runner.go:195] Run: which crictl
	I0414 14:29:56.363336 1213155 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:29:56.403201 1213155 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0414 14:29:56.403312 1213155 ssh_runner.go:195] Run: containerd --version
	I0414 14:29:56.430179 1213155 ssh_runner.go:195] Run: containerd --version
	I0414 14:29:56.454598 1213155 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.23 ...
	I0414 14:29:56.455785 1213155 out.go:177]   - env NO_PROXY=192.168.39.110
	I0414 14:29:56.456735 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:29:56.459280 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:56.459661 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:56.459691 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:56.459901 1213155 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 14:29:56.463673 1213155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:29:56.475057 1213155 mustload.go:65] Loading cluster: ha-290859
	I0414 14:29:56.475248 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:56.475557 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:56.475600 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:56.490597 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45247
	I0414 14:29:56.491136 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:56.491690 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:56.491711 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:56.492119 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:56.492309 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:56.493794 1213155 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:29:56.494134 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:56.494173 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:56.509360 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38381
	I0414 14:29:56.509774 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:56.510229 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:56.510256 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:56.510618 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:56.510840 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:56.511031 1213155 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859 for IP: 192.168.39.111
	I0414 14:29:56.511044 1213155 certs.go:194] generating shared ca certs ...
	I0414 14:29:56.511057 1213155 certs.go:226] acquiring lock for ca certs: {Name:mk7215406b4c41badf9eca6bf9f1036fd88f670e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:56.511177 1213155 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key
	I0414 14:29:56.511226 1213155 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key
	I0414 14:29:56.511236 1213155 certs.go:256] generating profile certs ...
	I0414 14:29:56.511347 1213155 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key
	I0414 14:29:56.511373 1213155 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e
	I0414 14:29:56.511386 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110 192.168.39.111 192.168.39.254]
	I0414 14:29:56.589532 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e ...
	I0414 14:29:56.589564 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e: {Name:mk9fb7b2adad4a62e9ebf1f50826b8647aaaa2d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:56.589727 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e ...
	I0414 14:29:56.589740 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e: {Name:mk7ad07038879568d4a23c2fb5c04f12405eb02f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:56.589811 1213155 certs.go:381] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt
	I0414 14:29:56.589948 1213155 certs.go:385] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key
	I0414 14:29:56.590096 1213155 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key
	I0414 14:29:56.590118 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0414 14:29:56.590137 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0414 14:29:56.590151 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0414 14:29:56.590162 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0414 14:29:56.590180 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0414 14:29:56.590198 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0414 14:29:56.590211 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0414 14:29:56.590220 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0414 14:29:56.590271 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem (1338 bytes)
	W0414 14:29:56.590298 1213155 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639_empty.pem, impossibly tiny 0 bytes
	I0414 14:29:56.590308 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 14:29:56.590327 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem (1082 bytes)
	I0414 14:29:56.590346 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:29:56.590368 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem (1675 bytes)
	I0414 14:29:56.590404 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:29:56.590430 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:56.590446 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem -> /usr/share/ca-certificates/1203639.pem
	I0414 14:29:56.590457 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /usr/share/ca-certificates/12036392.pem
	I0414 14:29:56.590494 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:56.593379 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:56.593755 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:56.593777 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:56.593996 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:56.594232 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:56.594405 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:56.594540 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:56.671687 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0414 14:29:56.677338 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0414 14:29:56.689003 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0414 14:29:56.693487 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0414 14:29:56.704430 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0414 14:29:56.708650 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0414 14:29:56.719039 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0414 14:29:56.723166 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0414 14:29:56.734152 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0414 14:29:56.738243 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0414 14:29:56.749081 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0414 14:29:56.753248 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0414 14:29:56.764073 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:29:56.788198 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:29:56.813073 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:29:56.835958 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:29:56.859645 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0414 14:29:56.882879 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 14:29:56.906187 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:29:56.928932 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 14:29:56.952365 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:29:56.974920 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem --> /usr/share/ca-certificates/1203639.pem (1338 bytes)
	I0414 14:29:56.998466 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /usr/share/ca-certificates/12036392.pem (1708 bytes)
	I0414 14:29:57.022704 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0414 14:29:57.038828 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0414 14:29:57.054237 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0414 14:29:57.069513 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0414 14:29:57.085532 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0414 14:29:57.101522 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0414 14:29:57.117372 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0414 14:29:57.132827 1213155 ssh_runner.go:195] Run: openssl version
	I0414 14:29:57.138331 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:29:57.148324 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:57.152469 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:57.152557 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:57.158279 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 14:29:57.169126 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1203639.pem && ln -fs /usr/share/ca-certificates/1203639.pem /etc/ssl/certs/1203639.pem"
	I0414 14:29:57.179995 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1203639.pem
	I0414 14:29:57.184265 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1203639.pem
	I0414 14:29:57.184340 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1203639.pem
	I0414 14:29:57.189810 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1203639.pem /etc/ssl/certs/51391683.0"
	I0414 14:29:57.199987 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12036392.pem && ln -fs /usr/share/ca-certificates/12036392.pem /etc/ssl/certs/12036392.pem"
	I0414 14:29:57.210177 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12036392.pem
	I0414 14:29:57.214740 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/12036392.pem
	I0414 14:29:57.214815 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12036392.pem
	I0414 14:29:57.221853 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12036392.pem /etc/ssl/certs/3ec20f2e.0"
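Each `openssl x509 -hash -noout` run above computes a certificate's subject hash, and the following `ln -fs` creates the <hash>.0 symlink (for example b5213941.0) that OpenSSL's directory lookup expects under /etc/ssl/certs. A sketch of one such link operation, shelling out to openssl exactly as the log does:

// certlink.go: sketch of the hash-symlink step above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace a stale link if present (ln -f)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/etc/ssl/certs/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		panic(err)
	}
	fmt.Println("linked")
}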
	I0414 14:29:57.232248 1213155 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:29:57.236270 1213155 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 14:29:57.236327 1213155 kubeadm.go:934] updating node {m02 192.168.39.111 8443 v1.32.2 containerd true true} ...
	I0414 14:29:57.236439 1213155 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-290859-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 14:29:57.236473 1213155 kube-vip.go:115] generating kube-vip config ...
	I0414 14:29:57.236525 1213155 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0414 14:29:57.252239 1213155 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0414 14:29:57.252336 1213155 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.10
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
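The manifest above runs kube-vip as a static pod on each control-plane node: it announces the virtual IP 192.168.39.254 via ARP on eth0 (vip_arp), elects a leader through the plndr-cp-lock lease in kube-system (vip_leaderelection), and, with lb_enable set, load-balances API-server traffic on port 8443 across the control-plane nodes.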
	I0414 14:29:57.252412 1213155 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 14:29:57.262218 1213155 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.2': No such file or directory
	
	Initiating transfer...
	I0414 14:29:57.262295 1213155 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.2
	I0414 14:29:57.271580 1213155 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
	I0414 14:29:57.271599 1213155 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubeadm
	I0414 14:29:57.271617 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubectl -> /var/lib/minikube/binaries/v1.32.2/kubectl
	I0414 14:29:57.271622 1213155 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubelet
	I0414 14:29:57.271681 1213155 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl
	I0414 14:29:57.275804 1213155 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubectl': No such file or directory
	I0414 14:29:57.275835 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubectl --> /var/lib/minikube/binaries/v1.32.2/kubectl (57323672 bytes)
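Each binary URL above carries a `?checksum=file:...sha256` query, which tells the go-getter download library to fetch the published .sha256 file and verify the artifact against it before use. A sketch of that verification step in isolation; the local file names are illustrative:

// shacheck.go: sketch of the checksum verification implied by the
// `?checksum=file:...sha256` query above.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
	"strings"
)

func verify(binPath, sumPath string) error {
	want, err := os.ReadFile(sumPath) // e.g. contents of kubeadm.sha256
	if err != nil {
		return err
	}
	f, err := os.Open(binPath)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != strings.TrimSpace(string(want)) {
		return fmt.Errorf("checksum mismatch: got %s", got)
	}
	return nil
}

func main() {
	if err := verify("kubeadm", "kubeadm.sha256"); err != nil {
		panic(err)
	}
	fmt.Println("checksum OK")
}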
	I0414 14:29:58.408400 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:29:58.423781 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubelet -> /var/lib/minikube/binaries/v1.32.2/kubelet
	I0414 14:29:58.423898 1213155 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet
	I0414 14:29:58.428378 1213155 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubelet': No such file or directory
	I0414 14:29:58.428415 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubelet --> /var/lib/minikube/binaries/v1.32.2/kubelet (77406468 bytes)
	I0414 14:29:58.749359 1213155 out.go:201] 
	W0414 14:29:58.750775 1213155 out.go:270] X Exiting due to GUEST_START: failed to start node: adding node: update node: downloading binaries: downloading kubeadm: download failed: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256 Dst:/home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubeadm.download Pwd: Mode:2 Umask:---------- Detectors:[0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0] Decompressors:map[bz2:0xc0004c8690 gz:0xc0004c8698 tar:0xc0004c8610 tar.bz2:0xc0004c8620 tar.gz:0xc0004c8630 tar.xz:0xc0004c8650 tar.zst:0xc0004c8660 tbz2:0xc0004c8620 tgz:0xc0004c8630 txz:0xc0004c8650 tzst:0xc0004c8660 xz:0xc0004c8700 zip:0xc0004c8720 zst:0xc0004c8708] Getters:map[file:0xc00216a250 http:0xc00012c550 https:0xc00012c5a0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: read tcp 10.154.0.3:60586->151.101.193.55:443: read: connection reset by peer
	W0414 14:29:58.750801 1213155 out.go:270] * 
	W0414 14:29:58.751639 1213155 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 14:29:58.753070 1213155 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-linux-amd64 start -p ha-290859 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd" : exit status 80
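The root cause is visible in the GUEST_START error above: the TCP connection to dl.k8s.io (151.101.193.55:443) was reset mid-stream while downloading kubeadm, so adding the m02 node aborted. That points at a transient network failure on the CI host rather than the code under test. A hedged sketch of the kind of bounded retry that masks such resets; the attempt count and backoff are assumptions, not minikube's actual policy:

// fetchretry.go: sketch of retrying a flaky download such as the kubeadm
// fetch that failed above with "connection reset by peer".
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func fetch(url, dst string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected status: %s", resp.Status)
	}
	f, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = io.Copy(f, resp.Body) // a mid-stream reset surfaces here
	return err
}

func main() {
	url := "https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm"
	var err error
	for attempt := 1; attempt <= 3; attempt++ {
		if err = fetch(url, "kubeadm.download"); err == nil {
			fmt.Println("downloaded")
			return
		}
		fmt.Printf("attempt %d failed: %v\n", attempt, err)
		time.Sleep(time.Duration(attempt) * 2 * time.Second)
	}
	panic(err)
}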
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-290859 -n ha-290859
helpers_test.go:244: <<< TestMultiControlPlane/serial/StartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-290859 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-290859 logs -n 25: (1.175005073s)
helpers_test.go:252: TestMultiControlPlane/serial/StartCluster logs: 
-- stdout --
	
	==> Audit <==
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-905978 ssh findmnt                                            | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC |                     |
	|                | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| mount          | -p functional-905978                                                     | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdspecific-port1389122606/001:/mount-9p |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1 --port 46464                                      |                   |         |         |                     |                     |
	| ssh            | functional-905978 ssh findmnt                                            | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC | 14 Apr 25 14:28 UTC |
	|                | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| ssh            | functional-905978 ssh -- ls                                              | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC | 14 Apr 25 14:28 UTC |
	|                | -la /mount-9p                                                            |                   |         |         |                     |                     |
	| ssh            | functional-905978 ssh sudo                                               | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC |                     |
	|                | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| mount          | -p functional-905978                                                     | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup516571382/001:/mount1    |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| mount          | -p functional-905978                                                     | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup516571382/001:/mount3    |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| ssh            | functional-905978 ssh findmnt                                            | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC |                     |
	|                | -T /mount1                                                               |                   |         |         |                     |                     |
	| mount          | -p functional-905978                                                     | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup516571382/001:/mount2    |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| ssh            | functional-905978 ssh findmnt                                            | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC | 14 Apr 25 14:28 UTC |
	|                | -T /mount1                                                               |                   |         |         |                     |                     |
	| ssh            | functional-905978 ssh findmnt                                            | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC | 14 Apr 25 14:28 UTC |
	|                | -T /mount2                                                               |                   |         |         |                     |                     |
	| ssh            | functional-905978 ssh findmnt                                            | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC | 14 Apr 25 14:28 UTC |
	|                | -T /mount3                                                               |                   |         |         |                     |                     |
	| mount          | -p functional-905978                                                     | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC |                     |
	|                | --kill=true                                                              |                   |         |         |                     |                     |
	| image          | functional-905978                                                        | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC | 14 Apr 25 14:28 UTC |
	|                | image ls --format short                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| image          | functional-905978                                                        | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC | 14 Apr 25 14:28 UTC |
	|                | image ls --format yaml                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| ssh            | functional-905978 ssh pgrep                                              | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC |                     |
	|                | buildkitd                                                                |                   |         |         |                     |                     |
	| image          | functional-905978 image build -t                                         | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC | 14 Apr 25 14:28 UTC |
	|                | localhost/my-image:functional-905978                                     |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                         |                   |         |         |                     |                     |
	| image          | functional-905978 image ls                                               | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC | 14 Apr 25 14:28 UTC |
	| image          | functional-905978                                                        | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC | 14 Apr 25 14:28 UTC |
	|                | image ls --format json                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| image          | functional-905978                                                        | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC | 14 Apr 25 14:28 UTC |
	|                | image ls --format table                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| update-context | functional-905978                                                        | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC | 14 Apr 25 14:28 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	| update-context | functional-905978                                                        | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC | 14 Apr 25 14:28 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	| update-context | functional-905978                                                        | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC | 14 Apr 25 14:28 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	| delete         | -p functional-905978                                                     | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC | 14 Apr 25 14:28 UTC |
	| start          | -p ha-290859 --wait=true                                                 | ha-290859         | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC |                     |
	|                | --memory=2200 --ha                                                       |                   |         |         |                     |                     |
	|                | -v=7 --alsologtostderr                                                   |                   |         |         |                     |                     |
	|                | --driver=kvm2                                                            |                   |         |         |                     |                     |
	|                | --container-runtime=containerd                                           |                   |         |         |                     |                     |
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 14:28:44
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
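
	Every entry below follows this glog header layout. As a convenience for post-processing these reports, here is a minimal Go sketch that splits one such line into its fields; the regexp and field names are illustrative, not anything minikube ships:

```go
package main

import (
	"fmt"
	"regexp"
)

// glogLine matches the header documented above:
//   [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var glogLine = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

func main() {
	line := "I0414 14:28:44.853283 1213155 out.go:345] Setting OutFile to fd 1 ..."
	m := glogLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a glog line")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s tid=%s file=%s:%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6], m[7])
}
```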
	I0414 14:28:44.853283 1213155 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:28:44.853383 1213155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:28:44.853391 1213155 out.go:358] Setting ErrFile to fd 2...
	I0414 14:28:44.853395 1213155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:28:44.853589 1213155 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
	I0414 14:28:44.854173 1213155 out.go:352] Setting JSON to false
	I0414 14:28:44.855127 1213155 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":22268,"bootTime":1744618657,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 14:28:44.855241 1213155 start.go:139] virtualization: kvm guest
	I0414 14:28:44.857434 1213155 out.go:177] * [ha-290859] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 14:28:44.858763 1213155 out.go:177]   - MINIKUBE_LOCATION=20512
	I0414 14:28:44.858802 1213155 notify.go:220] Checking for updates...
	I0414 14:28:44.861113 1213155 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 14:28:44.862568 1213155 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:28:44.864291 1213155 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:28:44.865558 1213155 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 14:28:44.866690 1213155 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 14:28:44.867994 1213155 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 14:28:44.903880 1213155 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 14:28:44.904972 1213155 start.go:297] selected driver: kvm2
	I0414 14:28:44.904990 1213155 start.go:901] validating driver "kvm2" against <nil>
	I0414 14:28:44.905002 1213155 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 14:28:44.905693 1213155 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:28:44.905760 1213155 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20512-1196368/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 14:28:44.921165 1213155 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 14:28:44.921211 1213155 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 14:28:44.921449 1213155 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 14:28:44.921483 1213155 cni.go:84] Creating CNI manager for ""
	I0414 14:28:44.921521 1213155 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0414 14:28:44.921528 1213155 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0414 14:28:44.921581 1213155 start.go:340] cluster config:
	{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:28:44.921681 1213155 iso.go:125] acquiring lock: {Name:mkbf783c803effe6c4b8297ac6b84dcca9e29413 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:28:44.923479 1213155 out.go:177] * Starting "ha-290859" primary control-plane node in "ha-290859" cluster
	I0414 14:28:44.924489 1213155 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:28:44.924534 1213155 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
	I0414 14:28:44.924545 1213155 cache.go:56] Caching tarball of preloaded images
	I0414 14:28:44.924630 1213155 preload.go:172] Found /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0414 14:28:44.924642 1213155 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0414 14:28:44.925004 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:28:44.925036 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json: {Name:mk9cf46898e9311ef305249e5d7a46d116958366 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:28:44.925215 1213155 start.go:360] acquireMachinesLock for ha-290859: {Name:mk496006d22a0565bb9e0d565e1b3cb0cf0971cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 14:28:44.925249 1213155 start.go:364] duration metric: took 19.936µs to acquireMachinesLock for "ha-290859"
	I0414 14:28:44.925270 1213155 start.go:93] Provisioning new machine with config: &{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0414 14:28:44.925333 1213155 start.go:125] createHost starting for "" (driver="kvm2")
	I0414 14:28:44.926873 1213155 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0414 14:28:44.927025 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:28:44.927081 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:28:44.941913 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35769
	I0414 14:28:44.942352 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:28:44.942833 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:28:44.942851 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:28:44.943193 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:28:44.943375 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:28:44.943526 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:28:44.943664 1213155 start.go:159] libmachine.API.Create for "ha-290859" (driver="kvm2")
	I0414 14:28:44.943687 1213155 client.go:168] LocalClient.Create starting
	I0414 14:28:44.943713 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem
	I0414 14:28:44.943749 1213155 main.go:141] libmachine: Decoding PEM data...
	I0414 14:28:44.943766 1213155 main.go:141] libmachine: Parsing certificate...
	I0414 14:28:44.943825 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem
	I0414 14:28:44.943844 1213155 main.go:141] libmachine: Decoding PEM data...
	I0414 14:28:44.943857 1213155 main.go:141] libmachine: Parsing certificate...
	I0414 14:28:44.943880 1213155 main.go:141] libmachine: Running pre-create checks...
	I0414 14:28:44.943888 1213155 main.go:141] libmachine: (ha-290859) Calling .PreCreateCheck
	I0414 14:28:44.944202 1213155 main.go:141] libmachine: (ha-290859) Calling .GetConfigRaw
	I0414 14:28:44.944583 1213155 main.go:141] libmachine: Creating machine...
	I0414 14:28:44.944596 1213155 main.go:141] libmachine: (ha-290859) Calling .Create
	I0414 14:28:44.944741 1213155 main.go:141] libmachine: (ha-290859) creating KVM machine...
	I0414 14:28:44.944764 1213155 main.go:141] libmachine: (ha-290859) creating network...
	I0414 14:28:44.945897 1213155 main.go:141] libmachine: (ha-290859) DBG | found existing default KVM network
	I0414 14:28:44.946500 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:44.946375 1213178 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001236b0}
	I0414 14:28:44.946525 1213155 main.go:141] libmachine: (ha-290859) DBG | created network xml: 
	I0414 14:28:44.946536 1213155 main.go:141] libmachine: (ha-290859) DBG | <network>
	I0414 14:28:44.946547 1213155 main.go:141] libmachine: (ha-290859) DBG |   <name>mk-ha-290859</name>
	I0414 14:28:44.946556 1213155 main.go:141] libmachine: (ha-290859) DBG |   <dns enable='no'/>
	I0414 14:28:44.946567 1213155 main.go:141] libmachine: (ha-290859) DBG |   
	I0414 14:28:44.946578 1213155 main.go:141] libmachine: (ha-290859) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0414 14:28:44.946589 1213155 main.go:141] libmachine: (ha-290859) DBG |     <dhcp>
	I0414 14:28:44.946597 1213155 main.go:141] libmachine: (ha-290859) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0414 14:28:44.946611 1213155 main.go:141] libmachine: (ha-290859) DBG |     </dhcp>
	I0414 14:28:44.946635 1213155 main.go:141] libmachine: (ha-290859) DBG |   </ip>
	I0414 14:28:44.946659 1213155 main.go:141] libmachine: (ha-290859) DBG |   
	I0414 14:28:44.946681 1213155 main.go:141] libmachine: (ha-290859) DBG | </network>
	I0414 14:28:44.946692 1213155 main.go:141] libmachine: (ha-290859) DBG | 
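
	The DBG lines above show the network definition minikube generated for the isolated "mk-ha-290859" network before handing it to libvirt. As a rough sketch of that step using the libvirt Go bindings (not necessarily the exact calls the kvm2 driver makes), defining and activating such a network looks like this:

```go
package main

import (
	"log"

	"libvirt.org/go/libvirt"
)

const networkXML = `<network>
  <name>mk-ha-290859</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp><range start='192.168.39.2' end='192.168.39.253'/></dhcp>
  </ip>
</network>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config above
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Define makes the network persistent; Create brings it up.
	network, err := conn.NetworkDefineXML(networkXML)
	if err != nil {
		log.Fatal(err)
	}
	defer network.Free()
	if err := network.Create(); err != nil {
		log.Fatal(err)
	}
	log.Println("private network mk-ha-290859 is active")
}
```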
	I0414 14:28:44.951588 1213155 main.go:141] libmachine: (ha-290859) DBG | trying to create private KVM network mk-ha-290859 192.168.39.0/24...
	I0414 14:28:45.019463 1213155 main.go:141] libmachine: (ha-290859) DBG | private KVM network mk-ha-290859 192.168.39.0/24 created
	I0414 14:28:45.019524 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.019424 1213178 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:28:45.019537 1213155 main.go:141] libmachine: (ha-290859) setting up store path in /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859 ...
	I0414 14:28:45.019577 1213155 main.go:141] libmachine: (ha-290859) building disk image from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 14:28:45.019612 1213155 main.go:141] libmachine: (ha-290859) Downloading /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 14:28:45.329551 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.329430 1213178 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa...
	I0414 14:28:45.651739 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.651571 1213178 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/ha-290859.rawdisk...
	I0414 14:28:45.651774 1213155 main.go:141] libmachine: (ha-290859) DBG | Writing magic tar header
	I0414 14:28:45.651813 1213155 main.go:141] libmachine: (ha-290859) DBG | Writing SSH key tar header
	I0414 14:28:45.651828 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.651709 1213178 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859 ...
	I0414 14:28:45.651838 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859
	I0414 14:28:45.651849 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines
	I0414 14:28:45.651870 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:28:45.651877 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368
	I0414 14:28:45.651888 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859 (perms=drwx------)
	I0414 14:28:45.651901 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines (perms=drwxr-xr-x)
	I0414 14:28:45.651912 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube (perms=drwxr-xr-x)
	I0414 14:28:45.651969 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 14:28:45.651997 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins
	I0414 14:28:45.652007 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368 (perms=drwxrwxr-x)
	I0414 14:28:45.652022 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 14:28:45.652031 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 14:28:45.652040 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home
	I0414 14:28:45.652050 1213155 main.go:141] libmachine: (ha-290859) DBG | skipping /home - not owner
	I0414 14:28:45.652117 1213155 main.go:141] libmachine: (ha-290859) creating domain...
	I0414 14:28:45.653155 1213155 main.go:141] libmachine: (ha-290859) define libvirt domain using xml: 
	I0414 14:28:45.653173 1213155 main.go:141] libmachine: (ha-290859) <domain type='kvm'>
	I0414 14:28:45.653182 1213155 main.go:141] libmachine: (ha-290859)   <name>ha-290859</name>
	I0414 14:28:45.653197 1213155 main.go:141] libmachine: (ha-290859)   <memory unit='MiB'>2200</memory>
	I0414 14:28:45.653206 1213155 main.go:141] libmachine: (ha-290859)   <vcpu>2</vcpu>
	I0414 14:28:45.653212 1213155 main.go:141] libmachine: (ha-290859)   <features>
	I0414 14:28:45.653231 1213155 main.go:141] libmachine: (ha-290859)     <acpi/>
	I0414 14:28:45.653240 1213155 main.go:141] libmachine: (ha-290859)     <apic/>
	I0414 14:28:45.653258 1213155 main.go:141] libmachine: (ha-290859)     <pae/>
	I0414 14:28:45.653267 1213155 main.go:141] libmachine: (ha-290859)     
	I0414 14:28:45.653272 1213155 main.go:141] libmachine: (ha-290859)   </features>
	I0414 14:28:45.653277 1213155 main.go:141] libmachine: (ha-290859)   <cpu mode='host-passthrough'>
	I0414 14:28:45.653281 1213155 main.go:141] libmachine: (ha-290859)   
	I0414 14:28:45.653287 1213155 main.go:141] libmachine: (ha-290859)   </cpu>
	I0414 14:28:45.653317 1213155 main.go:141] libmachine: (ha-290859)   <os>
	I0414 14:28:45.653340 1213155 main.go:141] libmachine: (ha-290859)     <type>hvm</type>
	I0414 14:28:45.653351 1213155 main.go:141] libmachine: (ha-290859)     <boot dev='cdrom'/>
	I0414 14:28:45.653362 1213155 main.go:141] libmachine: (ha-290859)     <boot dev='hd'/>
	I0414 14:28:45.653372 1213155 main.go:141] libmachine: (ha-290859)     <bootmenu enable='no'/>
	I0414 14:28:45.653379 1213155 main.go:141] libmachine: (ha-290859)   </os>
	I0414 14:28:45.653387 1213155 main.go:141] libmachine: (ha-290859)   <devices>
	I0414 14:28:45.653396 1213155 main.go:141] libmachine: (ha-290859)     <disk type='file' device='cdrom'>
	I0414 14:28:45.653409 1213155 main.go:141] libmachine: (ha-290859)       <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/boot2docker.iso'/>
	I0414 14:28:45.653425 1213155 main.go:141] libmachine: (ha-290859)       <target dev='hdc' bus='scsi'/>
	I0414 14:28:45.653434 1213155 main.go:141] libmachine: (ha-290859)       <readonly/>
	I0414 14:28:45.653441 1213155 main.go:141] libmachine: (ha-290859)     </disk>
	I0414 14:28:45.653450 1213155 main.go:141] libmachine: (ha-290859)     <disk type='file' device='disk'>
	I0414 14:28:45.653459 1213155 main.go:141] libmachine: (ha-290859)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 14:28:45.653472 1213155 main.go:141] libmachine: (ha-290859)       <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/ha-290859.rawdisk'/>
	I0414 14:28:45.653484 1213155 main.go:141] libmachine: (ha-290859)       <target dev='hda' bus='virtio'/>
	I0414 14:28:45.653515 1213155 main.go:141] libmachine: (ha-290859)     </disk>
	I0414 14:28:45.653535 1213155 main.go:141] libmachine: (ha-290859)     <interface type='network'>
	I0414 14:28:45.653542 1213155 main.go:141] libmachine: (ha-290859)       <source network='mk-ha-290859'/>
	I0414 14:28:45.653551 1213155 main.go:141] libmachine: (ha-290859)       <model type='virtio'/>
	I0414 14:28:45.653571 1213155 main.go:141] libmachine: (ha-290859)     </interface>
	I0414 14:28:45.653583 1213155 main.go:141] libmachine: (ha-290859)     <interface type='network'>
	I0414 14:28:45.653600 1213155 main.go:141] libmachine: (ha-290859)       <source network='default'/>
	I0414 14:28:45.653612 1213155 main.go:141] libmachine: (ha-290859)       <model type='virtio'/>
	I0414 14:28:45.653620 1213155 main.go:141] libmachine: (ha-290859)     </interface>
	I0414 14:28:45.653629 1213155 main.go:141] libmachine: (ha-290859)     <serial type='pty'>
	I0414 14:28:45.653637 1213155 main.go:141] libmachine: (ha-290859)       <target port='0'/>
	I0414 14:28:45.653643 1213155 main.go:141] libmachine: (ha-290859)     </serial>
	I0414 14:28:45.653650 1213155 main.go:141] libmachine: (ha-290859)     <console type='pty'>
	I0414 14:28:45.653666 1213155 main.go:141] libmachine: (ha-290859)       <target type='serial' port='0'/>
	I0414 14:28:45.653677 1213155 main.go:141] libmachine: (ha-290859)     </console>
	I0414 14:28:45.653688 1213155 main.go:141] libmachine: (ha-290859)     <rng model='virtio'>
	I0414 14:28:45.653706 1213155 main.go:141] libmachine: (ha-290859)       <backend model='random'>/dev/random</backend>
	I0414 14:28:45.653722 1213155 main.go:141] libmachine: (ha-290859)     </rng>
	I0414 14:28:45.653733 1213155 main.go:141] libmachine: (ha-290859)     
	I0414 14:28:45.653742 1213155 main.go:141] libmachine: (ha-290859)     
	I0414 14:28:45.653750 1213155 main.go:141] libmachine: (ha-290859)   </devices>
	I0414 14:28:45.653759 1213155 main.go:141] libmachine: (ha-290859) </domain>
	I0414 14:28:45.653770 1213155 main.go:141] libmachine: (ha-290859) 
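
	The <domain> document above is then defined and booted, per the "define libvirt domain using xml" and "starting domain" lines that follow. A compact sketch of those two calls with the same Go bindings (error handling trimmed; the XML file name is hypothetical):

```go
package main

import (
	"log"
	"os"

	"libvirt.org/go/libvirt"
)

// startDomain defines a persistent libvirt domain from its XML and boots it,
// the equivalent of `virsh define` followed by `virsh start`.
func startDomain(conn *libvirt.Connect, domainXML string) error {
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return err
	}
	defer dom.Free()
	return dom.Create()
}

func main() {
	xmlBytes, err := os.ReadFile("ha-290859.xml") // the <domain> doc logged above
	if err != nil {
		log.Fatal(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	if err := startDomain(conn, string(xmlBytes)); err != nil {
		log.Fatal(err)
	}
}
```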
	I0414 14:28:45.658722 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:59:bb:2c in network default
	I0414 14:28:45.659333 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:45.659353 1213155 main.go:141] libmachine: (ha-290859) starting domain...
	I0414 14:28:45.659378 1213155 main.go:141] libmachine: (ha-290859) ensuring networks are active...
	I0414 14:28:45.660118 1213155 main.go:141] libmachine: (ha-290859) Ensuring network default is active
	I0414 14:28:45.660455 1213155 main.go:141] libmachine: (ha-290859) Ensuring network mk-ha-290859 is active
	I0414 14:28:45.660871 1213155 main.go:141] libmachine: (ha-290859) getting domain XML...
	I0414 14:28:45.661572 1213155 main.go:141] libmachine: (ha-290859) creating domain...
	I0414 14:28:46.865636 1213155 main.go:141] libmachine: (ha-290859) waiting for IP...
	I0414 14:28:46.866384 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:46.866766 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:46.866798 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:46.866746 1213178 retry.go:31] will retry after 192.973653ms: waiting for domain to come up
	I0414 14:28:47.061336 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:47.061771 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:47.061833 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:47.061746 1213178 retry.go:31] will retry after 359.567223ms: waiting for domain to come up
	I0414 14:28:47.423487 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:47.423982 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:47.424016 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:47.423949 1213178 retry.go:31] will retry after 421.939914ms: waiting for domain to come up
	I0414 14:28:47.847747 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:47.848233 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:47.848285 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:47.848207 1213178 retry.go:31] will retry after 530.391474ms: waiting for domain to come up
	I0414 14:28:48.380081 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:48.380580 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:48.380623 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:48.380551 1213178 retry.go:31] will retry after 642.117854ms: waiting for domain to come up
	I0414 14:28:49.024104 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:49.024507 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:49.024543 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:49.024472 1213178 retry.go:31] will retry after 676.607867ms: waiting for domain to come up
	I0414 14:28:49.702625 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:49.702971 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:49.702999 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:49.702940 1213178 retry.go:31] will retry after 827.403569ms: waiting for domain to come up
	I0414 14:28:50.531673 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:50.532146 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:50.532168 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:50.532111 1213178 retry.go:31] will retry after 1.096062201s: waiting for domain to come up
	I0414 14:28:51.630700 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:51.631223 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:51.631271 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:51.631180 1213178 retry.go:31] will retry after 1.695737217s: waiting for domain to come up
	I0414 14:28:53.328391 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:53.328936 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:53.328976 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:53.328895 1213178 retry.go:31] will retry after 1.847433296s: waiting for domain to come up
	I0414 14:28:55.178635 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:55.179196 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:55.179222 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:55.179116 1213178 retry.go:31] will retry after 1.882043118s: waiting for domain to come up
	I0414 14:28:57.063275 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:57.063819 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:57.063839 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:57.063785 1213178 retry.go:31] will retry after 2.565601812s: waiting for domain to come up
	I0414 14:28:59.632546 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:59.633076 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:59.633121 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:59.633056 1213178 retry.go:31] will retry after 3.119155423s: waiting for domain to come up
	I0414 14:29:02.755950 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:02.756520 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:29:02.756617 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:29:02.756481 1213178 retry.go:31] will retry after 3.570724653s: waiting for domain to come up
	I0414 14:29:06.329744 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.330242 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has current primary IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.330260 1213155 main.go:141] libmachine: (ha-290859) found domain IP: 192.168.39.110
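
	The twenty-second stretch above is minikube polling the network's DHCP leases for the domain's MAC address, sleeping a little longer after each miss. A generic sketch of that poll-with-backoff shape; the lookup callback is a hypothetical stand-in for the lease query:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP retries lookup with a growing delay until it yields an address
// or the deadline passes, mirroring the retry.go lines above.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for domain to come up\n", delay)
		time.Sleep(delay)
		delay += delay / 2 // grow the interval, roughly like the log's sequence
	}
	return "", errors.New("timed out waiting for domain IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no DHCP lease yet")
		}
		return "192.168.39.110", nil
	}, time.Minute)
	fmt.Println(ip, err)
}
```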
	I0414 14:29:06.330269 1213155 main.go:141] libmachine: (ha-290859) reserving static IP address...
	I0414 14:29:06.330641 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find host DHCP lease matching {name: "ha-290859", mac: "52:54:00:be:9f:8b", ip: "192.168.39.110"} in network mk-ha-290859
	I0414 14:29:06.406487 1213155 main.go:141] libmachine: (ha-290859) DBG | Getting to WaitForSSH function...
	I0414 14:29:06.406521 1213155 main.go:141] libmachine: (ha-290859) reserved static IP address 192.168.39.110 for domain ha-290859
	I0414 14:29:06.406533 1213155 main.go:141] libmachine: (ha-290859) waiting for SSH...
	I0414 14:29:06.409873 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.410210 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:minikube Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.410253 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.410314 1213155 main.go:141] libmachine: (ha-290859) DBG | Using SSH client type: external
	I0414 14:29:06.410387 1213155 main.go:141] libmachine: (ha-290859) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa (-rw-------)
	I0414 14:29:06.410418 1213155 main.go:141] libmachine: (ha-290859) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.110 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:29:06.410439 1213155 main.go:141] libmachine: (ha-290859) DBG | About to run SSH command:
	I0414 14:29:06.410452 1213155 main.go:141] libmachine: (ha-290859) DBG | exit 0
	I0414 14:29:06.535060 1213155 main.go:141] libmachine: (ha-290859) DBG | SSH cmd err, output: <nil>: 
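
	The DBG block above records the exact external /usr/bin/ssh invocation used to probe the guest: `exit 0` succeeds only once sshd accepts the machine key. A stripped-down Go version of that probe, keeping a subset of the flags shown in the log:

```go
package main

import (
	"log"
	"os/exec"
)

func main() {
	// Key path, user, and address are the ones the log shows; host-key
	// checking is disabled just as in the logged command line.
	cmd := exec.Command("/usr/bin/ssh",
		"-F", "/dev/null",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", "/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa",
		"-p", "22",
		"docker@192.168.39.110",
		"exit 0")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("SSH not ready yet: %v (%s)", err, out)
	}
	log.Println("SSH is up")
}
```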
	I0414 14:29:06.535328 1213155 main.go:141] libmachine: (ha-290859) KVM machine creation complete
	I0414 14:29:06.535695 1213155 main.go:141] libmachine: (ha-290859) Calling .GetConfigRaw
	I0414 14:29:06.536306 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:06.536530 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:06.536742 1213155 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 14:29:06.536766 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:06.538276 1213155 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 14:29:06.538292 1213155 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 14:29:06.538297 1213155 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 14:29:06.538303 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:06.540789 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.541096 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.541142 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.541273 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:06.541468 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.541620 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.541797 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:06.541943 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:06.542216 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:06.542236 1213155 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 14:29:06.650464 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
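
	Once the external probe passes, libmachine repeats the `exit 0` check through its native in-process SSH client. Below is an equivalent written against golang.org/x/crypto/ssh; which library libmachine actually wraps is an assumption here, but the probe itself is the same:

```go
package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // same as StrictHostKeyChecking=no
	}
	client, err := ssh.Dial("tcp", "192.168.39.110:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	if err := sess.Run("exit 0"); err != nil { // the probe from the log
		log.Fatal(err)
	}
	log.Println("native SSH probe succeeded")
}
```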
	I0414 14:29:06.650493 1213155 main.go:141] libmachine: Detecting the provisioner...
	I0414 14:29:06.650505 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:06.653952 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.654723 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.654757 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.654985 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:06.655204 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.655393 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.655541 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:06.655742 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:06.655964 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:06.655983 1213155 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 14:29:06.763752 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 14:29:06.763848 1213155 main.go:141] libmachine: found compatible host: buildroot
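
	Provisioner detection is just `cat /etc/os-release` plus matching on the ID/NAME fields, which is how the buildroot provisioner gets picked here. A small sketch of parsing that key=value format:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns /etc/os-release content into a key/value map,
// stripping the optional quoting around values.
func parseOSRelease(s string) map[string]string {
	out := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(s))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		if k, v, ok := strings.Cut(line, "="); ok {
			out[k] = strings.Trim(v, `"`)
		}
	}
	return out
}

func main() {
	const osRelease = "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	info := parseOSRelease(osRelease)
	fmt.Println(info["ID"], info["PRETTY_NAME"]) // buildroot Buildroot 2023.02.9
}
```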
	I0414 14:29:06.763862 1213155 main.go:141] libmachine: Provisioning with buildroot...
	I0414 14:29:06.763874 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:29:06.764294 1213155 buildroot.go:166] provisioning hostname "ha-290859"
	I0414 14:29:06.764326 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:29:06.764523 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:06.767077 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.767516 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.767542 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.767639 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:06.767813 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.767978 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.768165 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:06.768341 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:06.768572 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:06.768583 1213155 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-290859 && echo "ha-290859" | sudo tee /etc/hostname
	I0414 14:29:06.889296 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-290859
	
	I0414 14:29:06.889330 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:06.892172 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.892600 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.892626 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.892865 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:06.893083 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.893277 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.893435 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:06.893648 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:06.893858 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:06.893874 1213155 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-290859' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-290859/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-290859' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:29:07.007141 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:29:07.007184 1213155 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1196368/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1196368/.minikube}
	I0414 14:29:07.007203 1213155 buildroot.go:174] setting up certificates
	I0414 14:29:07.007215 1213155 provision.go:84] configureAuth start
	I0414 14:29:07.007224 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:29:07.007528 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:29:07.010400 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.010788 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.010824 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.010979 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.012963 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.013271 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.013387 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.013515 1213155 provision.go:143] copyHostCerts
	I0414 14:29:07.013548 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:29:07.013586 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem, removing ...
	I0414 14:29:07.013609 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:29:07.013691 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem (1082 bytes)
	I0414 14:29:07.013790 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:29:07.013815 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem, removing ...
	I0414 14:29:07.013825 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:29:07.013863 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem (1123 bytes)
	I0414 14:29:07.013930 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:29:07.013953 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem, removing ...
	I0414 14:29:07.013962 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:29:07.013998 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem (1675 bytes)
	I0414 14:29:07.014066 1213155 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem org=jenkins.ha-290859 san=[127.0.0.1 192.168.39.110 ha-290859 localhost minikube]
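
	The server certificate above is minted locally and signed by the profile's CA, with the SAN list from the log line (both IPs plus the hostname aliases). A condensed crypto/x509 sketch of that step; the throwaway CA here merely stands in for ca.pem/ca-key.pem:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for the profile's ca.pem/ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-290859"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		DNSNames:     []string{"ha-290859", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.110")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```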
	I0414 14:29:07.096347 1213155 provision.go:177] copyRemoteCerts
	I0414 14:29:07.096413 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:29:07.096445 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.099387 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.099720 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.099754 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.099919 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.100133 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.100320 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.100477 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:07.185597 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0414 14:29:07.185665 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 14:29:07.208427 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0414 14:29:07.208514 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0414 14:29:07.230077 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0414 14:29:07.230146 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 14:29:07.252057 1213155 provision.go:87] duration metric: took 244.822415ms to configureAuth
	I0414 14:29:07.252098 1213155 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:29:07.252381 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:07.252417 1213155 main.go:141] libmachine: Checking connection to Docker...
	I0414 14:29:07.252428 1213155 main.go:141] libmachine: (ha-290859) Calling .GetURL
	I0414 14:29:07.253526 1213155 main.go:141] libmachine: (ha-290859) DBG | using libvirt version 6000000
	I0414 14:29:07.255629 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.255987 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.256013 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.256164 1213155 main.go:141] libmachine: Docker is up and running!
	I0414 14:29:07.256179 1213155 main.go:141] libmachine: Reticulating splines...
	I0414 14:29:07.256186 1213155 client.go:171] duration metric: took 22.312490028s to LocalClient.Create
	I0414 14:29:07.256207 1213155 start.go:167] duration metric: took 22.312544194s to libmachine.API.Create "ha-290859"
	I0414 14:29:07.256216 1213155 start.go:293] postStartSetup for "ha-290859" (driver="kvm2")
	I0414 14:29:07.256225 1213155 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:29:07.256242 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.256494 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:29:07.256518 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.258683 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.259095 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.259129 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.259274 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.259443 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.259598 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.259770 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:07.341222 1213155 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:29:07.344960 1213155 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:29:07.344983 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/addons for local assets ...
	I0414 14:29:07.345036 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/files for local assets ...
	I0414 14:29:07.345105 1213155 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> 12036392.pem in /etc/ssl/certs
	I0414 14:29:07.345117 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /etc/ssl/certs/12036392.pem
	I0414 14:29:07.345204 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:29:07.353618 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /etc/ssl/certs/12036392.pem (1708 bytes)
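Note on the filesync step above: anything placed under .minikube/files on the host is mirrored into the guest at the same relative path, which is how the extra CA bundle lands in /etc/ssl/certs. A minimal sketch of the same behavior over plain ssh/scp, assuming the key path shown in this log and a hypothetical /tmp/asset staging name (minikube's real implementation goes through its ssh_runner):

    MKHOME=/home/jenkins/minikube-integration/20512-1196368/.minikube
    KEY=$MKHOME/machines/ha-290859/id_rsa
    cd "$MKHOME/files"
    # every file under files/<path> becomes /<path> in the guest
    find . -type f | while read -r f; do
      rel=${f#./}
      scp -i "$KEY" "$f" docker@192.168.39.110:/tmp/asset
      ssh -i "$KEY" docker@192.168.39.110 "sudo mkdir -p /$(dirname "$rel") && sudo mv /tmp/asset /$rel"
    done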
	I0414 14:29:07.375295 1213155 start.go:296] duration metric: took 119.0622ms for postStartSetup
	I0414 14:29:07.375348 1213155 main.go:141] libmachine: (ha-290859) Calling .GetConfigRaw
	I0414 14:29:07.376009 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:29:07.378738 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.379089 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.379127 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.379360 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:29:07.379552 1213155 start.go:128] duration metric: took 22.454193164s to createHost
	I0414 14:29:07.379576 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.381911 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.382271 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.382299 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.382412 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.382636 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.382763 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.382918 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.383103 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:07.383383 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:07.383397 1213155 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:29:07.491798 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744640947.466359070
	
	I0414 14:29:07.491832 1213155 fix.go:216] guest clock: 1744640947.466359070
	I0414 14:29:07.491843 1213155 fix.go:229] Guest: 2025-04-14 14:29:07.46635907 +0000 UTC Remote: 2025-04-14 14:29:07.37956282 +0000 UTC m=+22.563725092 (delta=86.79625ms)
	I0414 14:29:07.491874 1213155 fix.go:200] guest clock delta is within tolerance: 86.79625ms
	I0414 14:29:07.491882 1213155 start.go:83] releasing machines lock for "ha-290859", held for 22.566621352s
	I0414 14:29:07.491951 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.492257 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:29:07.494784 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.495186 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.495213 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.495369 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.495891 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.496108 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.496210 1213155 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:29:07.496270 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.496330 1213155 ssh_runner.go:195] Run: cat /version.json
	I0414 14:29:07.496359 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.499187 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.499556 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.499585 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.499605 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.499687 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.499909 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.500059 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.500076 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.500080 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.500225 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:07.500343 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.500495 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.500676 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.500868 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:07.610155 1213155 ssh_runner.go:195] Run: systemctl --version
	I0414 14:29:07.615832 1213155 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 14:29:07.620841 1213155 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:29:07.620918 1213155 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:29:07.635201 1213155 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 14:29:07.635238 1213155 start.go:495] detecting cgroup driver to use...
	I0414 14:29:07.635339 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0414 14:29:07.664507 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0414 14:29:07.677886 1213155 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:29:07.677968 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:29:07.691126 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:29:07.704327 1213155 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:29:07.821296 1213155 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:29:07.981478 1213155 docker.go:233] disabling docker service ...
	I0414 14:29:07.981570 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:29:07.995082 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:29:08.007593 1213155 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:29:08.118166 1213155 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:29:08.233009 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 14:29:08.245943 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:29:08.262966 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0414 14:29:08.272218 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0414 14:29:08.281344 1213155 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0414 14:29:08.281397 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0414 14:29:08.290468 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:29:08.299561 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0414 14:29:08.308656 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:29:08.317719 1213155 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:29:08.327133 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0414 14:29:08.336264 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0414 14:29:08.345279 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
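The sed pipeline above edits /etc/containerd/config.toml in place: pin the sandbox image to registry.k8s.io/pause:3.10, force the runc v2 runtime, point conf_dir at /etc/cni/net.d, and select the cgroupfs cgroup driver by writing SystemdCgroup = false. The driver toggle, applied by hand with the same sed as the log plus the restart that makes it take effect:

    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    sudo systemctl daemon-reload && sudo systemctl restart containerd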
	I0414 14:29:08.354386 1213155 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:29:08.362578 1213155 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:29:08.362625 1213155 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:29:08.374609 1213155 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
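The status-255 failure above is expected on a fresh guest: the bridge-netfilter sysctl only exists once the br_netfilter module is loaded, so minikube falls back to modprobe and then enables IPv4 forwarding; both are standard kubeadm networking prerequisites. An idempotent rerun of the same pair:

    sudo modprobe br_netfilter
    echo 1 | sudo tee /proc/sys/net/bridge/bridge-nf-call-iptables /proc/sys/net/ipv4/ip_forward >/dev/null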
	I0414 14:29:08.383117 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:29:08.490311 1213155 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0414 14:29:08.517222 1213155 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0414 14:29:08.517297 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:29:08.522141 1213155 retry.go:31] will retry after 1.326617724s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
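A freshly restarted containerd needs a moment to create its socket, hence the retry above ("Will wait 60s for socket path"). The same wait as a plain shell poll, equivalent in behavior to minikube's retry loop but not its actual code:

    timeout 60 sh -c 'until [ -S /run/containerd/containerd.sock ]; do sleep 1; done' \
      && echo "containerd.sock is up"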
	I0414 14:29:09.849693 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:29:09.855377 1213155 start.go:563] Will wait 60s for crictl version
	I0414 14:29:09.855452 1213155 ssh_runner.go:195] Run: which crictl
	I0414 14:29:09.859356 1213155 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:29:09.901676 1213155 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0414 14:29:09.901749 1213155 ssh_runner.go:195] Run: containerd --version
	I0414 14:29:09.933729 1213155 ssh_runner.go:195] Run: containerd --version
	I0414 14:29:09.957147 1213155 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.23 ...
	I0414 14:29:09.958358 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:29:09.961074 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:09.961436 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:09.961465 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:09.961654 1213155 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 14:29:09.965618 1213155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:29:09.977763 1213155 kubeadm.go:883] updating cluster {Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 14:29:09.977920 1213155 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:29:09.977985 1213155 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:29:10.007423 1213155 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0414 14:29:10.007567 1213155 ssh_runner.go:195] Run: which lz4
	I0414 14:29:10.011302 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0414 14:29:10.011399 1213155 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 14:29:10.015201 1213155 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 14:29:10.015237 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (398567491 bytes)
	I0414 14:29:11.177802 1213155 containerd.go:563] duration metric: took 1.166430977s to copy over tarball
	I0414 14:29:11.177883 1213155 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 14:29:13.222422 1213155 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.044497794s)
	I0414 14:29:13.222461 1213155 containerd.go:570] duration metric: took 2.04462504s to extract the tarball
	I0414 14:29:13.222471 1213155 ssh_runner.go:146] rm: /preloaded.tar.lz4
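Rather than pulling images one by one, minikube ships a preloaded image tarball: the steps above check whether it is already on the guest, scp it over, unpack it into /var with lz4, and delete it. By hand that is roughly the following (paths from this log; the /tmp staging name is an assumption, and lz4 must exist on the guest, which the Buildroot image provides):

    MKHOME=/home/jenkins/minikube-integration/20512-1196368/.minikube
    KEY=$MKHOME/machines/ha-290859/id_rsa
    scp -i "$KEY" "$MKHOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4" \
        docker@192.168.39.110:/tmp/preloaded.tar.lz4
    ssh -i "$KEY" docker@192.168.39.110 \
        'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 && rm /tmp/preloaded.tar.lz4'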
	I0414 14:29:13.258541 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:29:13.368119 1213155 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0414 14:29:13.394813 1213155 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:29:13.428402 1213155 retry.go:31] will retry after 248.442754ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-04-14T14:29:13Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0414 14:29:13.677983 1213155 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:29:13.709958 1213155 containerd.go:627] all images are preloaded for containerd runtime.
	I0414 14:29:13.709986 1213155 cache_images.go:84] Images are preloaded, skipping loading
	I0414 14:29:13.709997 1213155 kubeadm.go:934] updating node { 192.168.39.110 8443 v1.32.2 containerd true true} ...
	I0414 14:29:13.710119 1213155 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-290859 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.110
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
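The kubelet unit text above uses the systemd drop-in override pattern: the first, empty ExecStart= clears the command line inherited from the packaged kubelet.service, and the second ExecStart= substitutes minikube's own. Written out as the drop-in that the scp a few lines below installs (same content as the log, path from the mkdir/scp steps), it would be roughly:

    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
    [Unit]
    Wants=containerd.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-290859 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.110

    [Install]
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart kubelet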
	I0414 14:29:13.710205 1213155 ssh_runner.go:195] Run: sudo crictl info
	I0414 14:29:13.747854 1213155 cni.go:84] Creating CNI manager for ""
	I0414 14:29:13.747881 1213155 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0414 14:29:13.747891 1213155 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 14:29:13.747912 1213155 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.110 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-290859 NodeName:ha-290859 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.110"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.110 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 14:29:13.748064 1213155 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.110
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-290859"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.110"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.110"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
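The rendered file above bundles four documents separated by --- (InitConfiguration and ClusterConfiguration for kubeadm itself, then KubeletConfiguration and KubeProxyConfiguration), with eviction and image GC deliberately neutered for the small test VM. When validating such a file by hand, a dry run is the cheap check; this is a generic kubeadm facility, not something this test itself invokes:

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run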
	I0414 14:29:13.748098 1213155 kube-vip.go:115] generating kube-vip config ...
	I0414 14:29:13.748144 1213155 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0414 14:29:13.764006 1213155 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0414 14:29:13.764157 1213155 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.10
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
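The manifest above makes every control-plane node run kube-vip as a static pod; leader election on the plndr-cp-lock lease picks one instance, which answers ARP for the VIP 192.168.39.254 and, with lb_enable, spreads :8443 across the API servers, giving controlPlaneEndpoint a stable address. Once a leader holds the VIP, a probe from the host should answer, assuming the apiserver health endpoints remain open to anonymous requests as they are by default:

    curl -k https://192.168.39.254:8443/healthz   # prints 'ok' once an apiserver is reachable behind the VIP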
	I0414 14:29:13.764258 1213155 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 14:29:13.773742 1213155 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 14:29:13.773825 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0414 14:29:13.782879 1213155 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0414 14:29:13.798384 1213155 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 14:29:13.813614 1213155 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0414 14:29:13.828571 1213155 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1448 bytes)
	I0414 14:29:13.844489 1213155 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0414 14:29:13.848595 1213155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:29:13.861109 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:29:13.970530 1213155 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:29:13.987774 1213155 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859 for IP: 192.168.39.110
	I0414 14:29:13.987806 1213155 certs.go:194] generating shared ca certs ...
	I0414 14:29:13.987826 1213155 certs.go:226] acquiring lock for ca certs: {Name:mk7215406b4c41badf9eca6bf9f1036fd88f670e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:13.988007 1213155 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key
	I0414 14:29:13.988081 1213155 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key
	I0414 14:29:13.988097 1213155 certs.go:256] generating profile certs ...
	I0414 14:29:13.988180 1213155 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key
	I0414 14:29:13.988200 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt with IP's: []
	I0414 14:29:14.112386 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt ...
	I0414 14:29:14.112419 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt: {Name:mkaa12fb6551a5751b7fccd564d65a45c41d9fae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.112582 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key ...
	I0414 14:29:14.112593 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key: {Name:mk289f4dd0a4fd9031dc4ffc7198a0cf95bd5550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.112674 1213155 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037
	I0414 14:29:14.112690 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110 192.168.39.254]
	I0414 14:29:14.362652 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037 ...
	I0414 14:29:14.362686 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037: {Name:mkb37a2918627d85c90b385a1878c8973ae4ce15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.362861 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037 ...
	I0414 14:29:14.362875 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037: {Name:mk9be12aff468559ae8511cb5c354c2cb0f19d89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.362947 1213155 certs.go:381] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt
	I0414 14:29:14.363058 1213155 certs.go:385] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key
	I0414 14:29:14.363124 1213155 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key
	I0414 14:29:14.363139 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt with IP's: []
	I0414 14:29:14.734988 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt ...
	I0414 14:29:14.735020 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt: {Name:mkd4197f76084714cf4c93b86f69c9de5e486dfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.735175 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key ...
	I0414 14:29:14.735185 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key: {Name:mkafd73813de8b0bb698e460f51557bc241d5b76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.735249 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0414 14:29:14.735287 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0414 14:29:14.735300 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0414 14:29:14.735312 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0414 14:29:14.735324 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0414 14:29:14.735336 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0414 14:29:14.735348 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0414 14:29:14.735362 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0414 14:29:14.735413 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem (1338 bytes)
	W0414 14:29:14.735450 1213155 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639_empty.pem, impossibly tiny 0 bytes
	I0414 14:29:14.735459 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 14:29:14.735483 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem (1082 bytes)
	I0414 14:29:14.735504 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:29:14.735524 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem (1675 bytes)
	I0414 14:29:14.735559 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:29:14.735585 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:14.735598 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem -> /usr/share/ca-certificates/1203639.pem
	I0414 14:29:14.735609 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /usr/share/ca-certificates/12036392.pem
	I0414 14:29:14.736193 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:29:14.767094 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:29:14.800218 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:29:14.821856 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:29:14.844537 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0414 14:29:14.866333 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 14:29:14.888112 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:29:14.916382 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 14:29:14.938747 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:29:14.961044 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem --> /usr/share/ca-certificates/1203639.pem (1338 bytes)
	I0414 14:29:14.982817 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /usr/share/ca-certificates/12036392.pem (1708 bytes)
	I0414 14:29:15.004432 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 14:29:15.020381 1213155 ssh_runner.go:195] Run: openssl version
	I0414 14:29:15.026049 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:29:15.036472 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:15.040722 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:15.040772 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:15.046327 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 14:29:15.056866 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1203639.pem && ln -fs /usr/share/ca-certificates/1203639.pem /etc/ssl/certs/1203639.pem"
	I0414 14:29:15.067689 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1203639.pem
	I0414 14:29:15.071944 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1203639.pem
	I0414 14:29:15.071988 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1203639.pem
	I0414 14:29:15.077553 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1203639.pem /etc/ssl/certs/51391683.0"
	I0414 14:29:15.088088 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12036392.pem && ln -fs /usr/share/ca-certificates/12036392.pem /etc/ssl/certs/12036392.pem"
	I0414 14:29:15.098760 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12036392.pem
	I0414 14:29:15.103102 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/12036392.pem
	I0414 14:29:15.103157 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12036392.pem
	I0414 14:29:15.108670 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12036392.pem /etc/ssl/certs/3ec20f2e.0"
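The ls / openssl x509 -hash / ln -fs sequence above follows the standard OpenSSL trust-store layout: each CA in /etc/ssl/certs must also be reachable under its subject hash as <hash>.0 (here b5213941.0, 51391683.0, and 3ec20f2e.0). The generic recipe for any extra CA:

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/$HASH.0"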
	I0414 14:29:15.119187 1213155 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:29:15.123052 1213155 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 14:29:15.123124 1213155 kubeadm.go:392] StartCluster: {Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:29:15.123226 1213155 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0414 14:29:15.123302 1213155 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 14:29:15.161985 1213155 cri.go:89] found id: ""
	I0414 14:29:15.162066 1213155 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 14:29:15.171810 1213155 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 14:29:15.180816 1213155 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 14:29:15.189781 1213155 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 14:29:15.189798 1213155 kubeadm.go:157] found existing configuration files:
	
	I0414 14:29:15.189837 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 14:29:15.198461 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 14:29:15.198520 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 14:29:15.207495 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 14:29:15.216131 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 14:29:15.216195 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 14:29:15.224923 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 14:29:15.233259 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 14:29:15.233331 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 14:29:15.241811 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 14:29:15.250678 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 14:29:15.250735 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
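The grep/rm pairs above are the stale-config sweep: any kubeconfig under /etc/kubernetes that does not already point at https://control-plane.minikube.internal:8443 is deleted so kubeadm regenerates it. The same sweep written compactly (equivalent shell, not minikube's own code):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done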
	I0414 14:29:15.260028 1213155 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 14:29:15.480841 1213155 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 14:29:26.375395 1213155 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0414 14:29:26.375454 1213155 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 14:29:26.375539 1213155 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 14:29:26.375638 1213155 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 14:29:26.375756 1213155 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 14:29:26.375859 1213155 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 14:29:26.377483 1213155 out.go:235]   - Generating certificates and keys ...
	I0414 14:29:26.377576 1213155 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 14:29:26.377649 1213155 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 14:29:26.377746 1213155 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 14:29:26.377814 1213155 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 14:29:26.377894 1213155 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 14:29:26.377993 1213155 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 14:29:26.378062 1213155 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 14:29:26.378201 1213155 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-290859 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
	I0414 14:29:26.378273 1213155 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 14:29:26.378435 1213155 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-290859 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
	I0414 14:29:26.378525 1213155 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 14:29:26.378617 1213155 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 14:29:26.378679 1213155 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 14:29:26.378756 1213155 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 14:29:26.378826 1213155 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 14:29:26.378905 1213155 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0414 14:29:26.378987 1213155 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 14:29:26.379078 1213155 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 14:29:26.379147 1213155 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 14:29:26.379232 1213155 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 14:29:26.379336 1213155 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 14:29:26.381520 1213155 out.go:235]   - Booting up control plane ...
	I0414 14:29:26.381636 1213155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 14:29:26.381716 1213155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 14:29:26.381797 1213155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 14:29:26.381942 1213155 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 14:29:26.382066 1213155 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 14:29:26.382127 1213155 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 14:29:26.382279 1213155 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0414 14:29:26.382430 1213155 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0414 14:29:26.382522 1213155 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.073677ms
	I0414 14:29:26.382613 1213155 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0414 14:29:26.382699 1213155 kubeadm.go:310] [api-check] The API server is healthy after 6.046564753s
	I0414 14:29:26.382824 1213155 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0414 14:29:26.382965 1213155 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0414 14:29:26.383055 1213155 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0414 14:29:26.383232 1213155 kubeadm.go:310] [mark-control-plane] Marking the node ha-290859 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0414 14:29:26.383336 1213155 kubeadm.go:310] [bootstrap-token] Using token: vqb1fe.jxjhh2el8g0wstxf
	I0414 14:29:26.384515 1213155 out.go:235]   - Configuring RBAC rules ...
	I0414 14:29:26.384631 1213155 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0414 14:29:26.384713 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0414 14:29:26.384863 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0414 14:29:26.384975 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0414 14:29:26.385071 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0414 14:29:26.385151 1213155 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0414 14:29:26.385262 1213155 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0414 14:29:26.385326 1213155 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0414 14:29:26.385400 1213155 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0414 14:29:26.385416 1213155 kubeadm.go:310] 
	I0414 14:29:26.385469 1213155 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0414 14:29:26.385475 1213155 kubeadm.go:310] 
	I0414 14:29:26.385551 1213155 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0414 14:29:26.385557 1213155 kubeadm.go:310] 
	I0414 14:29:26.385578 1213155 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0414 14:29:26.385628 1213155 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0414 14:29:26.385686 1213155 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0414 14:29:26.385693 1213155 kubeadm.go:310] 
	I0414 14:29:26.385743 1213155 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0414 14:29:26.385752 1213155 kubeadm.go:310] 
	I0414 14:29:26.385800 1213155 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0414 14:29:26.385806 1213155 kubeadm.go:310] 
	I0414 14:29:26.385852 1213155 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0414 14:29:26.385921 1213155 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0414 14:29:26.385993 1213155 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0414 14:29:26.385999 1213155 kubeadm.go:310] 
	I0414 14:29:26.386068 1213155 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0414 14:29:26.386137 1213155 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0414 14:29:26.386143 1213155 kubeadm.go:310] 
	I0414 14:29:26.386219 1213155 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token vqb1fe.jxjhh2el8g0wstxf \
	I0414 14:29:26.386324 1213155 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c1bc537cee1b1ab5982921331b936a1839b1da6b0963279993bdeae11071854b \
	I0414 14:29:26.386357 1213155 kubeadm.go:310] 	--control-plane 
	I0414 14:29:26.386367 1213155 kubeadm.go:310] 
	I0414 14:29:26.386468 1213155 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0414 14:29:26.386481 1213155 kubeadm.go:310] 
	I0414 14:29:26.386583 1213155 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vqb1fe.jxjhh2el8g0wstxf \
	I0414 14:29:26.386727 1213155 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c1bc537cee1b1ab5982921331b936a1839b1da6b0963279993bdeae11071854b 
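Note that the control-plane and worker join commands above differ only in the --control-plane flag; the token and discovery hash are shared. Because the [upload-certs] phase was skipped earlier, kubeadm expects the CA material to already be present on any additional control-plane node, which is why minikube copies the cluster certificates to ha-290859-m02 itself rather than relying on kubeadm's --certificate-key mechanism.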
	I0414 14:29:26.386755 1213155 cni.go:84] Creating CNI manager for ""
	I0414 14:29:26.386764 1213155 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0414 14:29:26.388208 1213155 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0414 14:29:26.389242 1213155 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0414 14:29:26.394753 1213155 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0414 14:29:26.394774 1213155 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0414 14:29:26.412210 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0414 14:29:26.820060 1213155 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 14:29:26.820136 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:26.820188 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-290859 minikube.k8s.io/updated_at=2025_04_14T14_29_26_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=ed8f1f01b35eff2786f40199152a1775806f2de2 minikube.k8s.io/name=ha-290859 minikube.k8s.io/primary=true
	I0414 14:29:27.135153 1213155 ops.go:34] apiserver oom_adj: -16
	I0414 14:29:27.135367 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:27.635449 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:28.135449 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:28.636235 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:29.136309 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:29.636026 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:29.742992 1213155 kubeadm.go:1113] duration metric: took 2.922923817s to wait for elevateKubeSystemPrivileges
	I0414 14:29:29.743045 1213155 kubeadm.go:394] duration metric: took 14.619926947s to StartCluster
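The repeated "kubectl get sa default" runs above are a ~500ms poll loop: minikube waits until the default ServiceAccount exists, which indicates kube-controller-manager's ServiceAccount controller is up and the minikube-rbac ClusterRoleBinding created earlier can take effect. The 2.92s elevateKubeSystemPrivileges metric covers both the binding creation and this wait.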
	I0414 14:29:29.743074 1213155 settings.go:142] acquiring lock: {Name:mk41907a6d0da0bb56b7cd58b5d8065ec36ecc97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:29.743194 1213155 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:29:29.744197 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/kubeconfig: {Name:mkeb969af3beabfdafe344f27031959a97621135 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:29.744491 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0414 14:29:29.744502 1213155 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0414 14:29:29.744531 1213155 start.go:241] waiting for startup goroutines ...
	I0414 14:29:29.744555 1213155 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 14:29:29.744638 1213155 addons.go:69] Setting storage-provisioner=true in profile "ha-290859"
	I0414 14:29:29.744667 1213155 addons.go:238] Setting addon storage-provisioner=true in "ha-290859"
	I0414 14:29:29.744674 1213155 addons.go:69] Setting default-storageclass=true in profile "ha-290859"
	I0414 14:29:29.744699 1213155 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:29:29.744707 1213155 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-290859"
	I0414 14:29:29.744811 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:29.745181 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.745244 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.745183 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.745351 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.761398 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40887
	I0414 14:29:29.761447 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39907
	I0414 14:29:29.761914 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.762048 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.762457 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.762483 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.762590 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.762615 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.762878 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.762995 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.763052 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:29.763589 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.763641 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.765711 1213155 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:29:29.765898 1213155 kapi.go:59] client config for ha-290859: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt", KeyFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key", CAFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0414 14:29:29.766513 1213155 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0414 14:29:29.766536 1213155 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0414 14:29:29.766543 1213155 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0414 14:29:29.766547 1213155 cert_rotation.go:140] Starting client certificate rotation controller
	I0414 14:29:29.766549 1213155 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0414 14:29:29.766958 1213155 addons.go:238] Setting addon default-storageclass=true in "ha-290859"
	I0414 14:29:29.767009 1213155 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:29:29.767411 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.767464 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.779638 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46315
	I0414 14:29:29.780179 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.780847 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.780887 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.781279 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.781512 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:29.783372 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:29.783403 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36833
	I0414 14:29:29.783908 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.784349 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.784370 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.784677 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.785084 1213155 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 14:29:29.785313 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.785366 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.786178 1213155 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 14:29:29.786200 1213155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 14:29:29.786221 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:29.789923 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:29.790430 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:29.790464 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:29.790637 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:29.790795 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:29.790922 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:29.791099 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:29.802732 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37933
	I0414 14:29:29.803356 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.803862 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.803890 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.804276 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.804490 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:29.806170 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:29.806431 1213155 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 14:29:29.806453 1213155 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 14:29:29.806472 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:29.808998 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:29.809401 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:29.809433 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:29.809569 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:29.809729 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:29.809892 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:29.810022 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:29.896163 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0414 14:29:29.925192 1213155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 14:29:29.976032 1213155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 14:29:30.538988 1213155 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
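The sed pipeline above rewrites the coredns ConfigMap in place. Reconstructed from the command itself (the replaced ConfigMap is not echoed back into the log), the injected Corefile fragment is:

	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }

inserted immediately before the "forward . /etc/resolv.conf" directive, plus a "log" directive after "errors", so pods can resolve host.minikube.internal to the host-side bridge address 192.168.39.1.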
	I0414 14:29:30.715801 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.715837 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.715837 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.715853 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.716172 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.716195 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.716206 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.716213 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.716280 1213155 main.go:141] libmachine: (ha-290859) DBG | Closing plugin on server side
	I0414 14:29:30.716311 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.716327 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.716336 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.716346 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.716567 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.716583 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.716597 1213155 main.go:141] libmachine: (ha-290859) DBG | Closing plugin on server side
	I0414 14:29:30.716566 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.716613 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.716759 1213155 round_trippers.go:470] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0414 14:29:30.716773 1213155 round_trippers.go:476] Request Headers:
	I0414 14:29:30.716785 1213155 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:29:30.716791 1213155 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:29:30.730413 1213155 round_trippers.go:581] Response Status: 200 OK in 13 milliseconds
	I0414 14:29:30.730637 1213155 round_trippers.go:470] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0414 14:29:30.730648 1213155 round_trippers.go:476] Request Headers:
	I0414 14:29:30.730655 1213155 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:29:30.730659 1213155 round_trippers.go:480]     Content-Type: application/vnd.kubernetes.protobuf
	I0414 14:29:30.730662 1213155 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:29:30.734349 1213155 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
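The GET/PUT pair against /apis/storage.k8s.io/v1/storageclasses above is the default-storageclass addon checking and then updating the "standard" StorageClass. Kubernetes marks a StorageClass as the cluster default with the annotation storageclass.kubernetes.io/is-default-class: "true"; the protobuf request body is not logged here, so the exact payload is inferred rather than shown.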
	I0414 14:29:30.734498 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.734513 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.734892 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.734913 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.734944 1213155 main.go:141] libmachine: (ha-290859) DBG | Closing plugin on server side
	I0414 14:29:30.736606 1213155 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0414 14:29:30.738276 1213155 addons.go:514] duration metric: took 993.723048ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0414 14:29:30.738323 1213155 start.go:246] waiting for cluster config update ...
	I0414 14:29:30.738339 1213155 start.go:255] writing updated cluster config ...
	I0414 14:29:30.739993 1213155 out.go:201] 
	I0414 14:29:30.741235 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:30.741303 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:29:30.742718 1213155 out.go:177] * Starting "ha-290859-m02" control-plane node in "ha-290859" cluster
	I0414 14:29:30.743745 1213155 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:29:30.743770 1213155 cache.go:56] Caching tarball of preloaded images
	I0414 14:29:30.743876 1213155 preload.go:172] Found /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0414 14:29:30.743890 1213155 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0414 14:29:30.743970 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:29:30.744172 1213155 start.go:360] acquireMachinesLock for ha-290859-m02: {Name:mk496006d22a0565bb9e0d565e1b3cb0cf0971cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 14:29:30.744229 1213155 start.go:364] duration metric: took 28.185µs to acquireMachinesLock for "ha-290859-m02"
	I0414 14:29:30.744249 1213155 start.go:93] Provisioning new machine with config: &{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0414 14:29:30.744334 1213155 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0414 14:29:30.745838 1213155 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0414 14:29:30.745923 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:30.745962 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:30.761449 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46555
	I0414 14:29:30.761938 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:30.762474 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:30.762500 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:30.762925 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:30.763197 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:29:30.763401 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:30.763637 1213155 start.go:159] libmachine.API.Create for "ha-290859" (driver="kvm2")
	I0414 14:29:30.763675 1213155 client.go:168] LocalClient.Create starting
	I0414 14:29:30.763717 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem
	I0414 14:29:30.763761 1213155 main.go:141] libmachine: Decoding PEM data...
	I0414 14:29:30.763783 1213155 main.go:141] libmachine: Parsing certificate...
	I0414 14:29:30.763861 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem
	I0414 14:29:30.763890 1213155 main.go:141] libmachine: Decoding PEM data...
	I0414 14:29:30.763907 1213155 main.go:141] libmachine: Parsing certificate...
	I0414 14:29:30.763954 1213155 main.go:141] libmachine: Running pre-create checks...
	I0414 14:29:30.763968 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .PreCreateCheck
	I0414 14:29:30.764183 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
	I0414 14:29:30.764607 1213155 main.go:141] libmachine: Creating machine...
	I0414 14:29:30.764633 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .Create
	I0414 14:29:30.764796 1213155 main.go:141] libmachine: (ha-290859-m02) creating KVM machine...
	I0414 14:29:30.764820 1213155 main.go:141] libmachine: (ha-290859-m02) creating network...
	I0414 14:29:30.765949 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found existing default KVM network
	I0414 14:29:30.766029 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found existing private KVM network mk-ha-290859
	I0414 14:29:30.766196 1213155 main.go:141] libmachine: (ha-290859-m02) setting up store path in /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02 ...
	I0414 14:29:30.766222 1213155 main.go:141] libmachine: (ha-290859-m02) building disk image from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 14:29:30.766301 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:30.766189 1213531 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:29:30.766373 1213155 main.go:141] libmachine: (ha-290859-m02) Downloading /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 14:29:31.062543 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:31.062391 1213531 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa...
	I0414 14:29:31.719024 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:31.718890 1213531 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/ha-290859-m02.rawdisk...
	I0414 14:29:31.719061 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Writing magic tar header
	I0414 14:29:31.719076 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Writing SSH key tar header
	I0414 14:29:31.719086 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:31.719015 1213531 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02 ...
	I0414 14:29:31.719187 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02
	I0414 14:29:31.719213 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02 (perms=drwx------)
	I0414 14:29:31.719221 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines
	I0414 14:29:31.719232 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:29:31.719239 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines (perms=drwxr-xr-x)
	I0414 14:29:31.719270 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368
	I0414 14:29:31.719288 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube (perms=drwxr-xr-x)
	I0414 14:29:31.719298 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 14:29:31.719315 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins
	I0414 14:29:31.719326 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home
	I0414 14:29:31.719336 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | skipping /home - not owner
	I0414 14:29:31.719349 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368 (perms=drwxrwxr-x)
	I0414 14:29:31.719368 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 14:29:31.719380 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 14:29:31.719386 1213155 main.go:141] libmachine: (ha-290859-m02) creating domain...
	I0414 14:29:31.720303 1213155 main.go:141] libmachine: (ha-290859-m02) define libvirt domain using xml: 
	I0414 14:29:31.720321 1213155 main.go:141] libmachine: (ha-290859-m02) <domain type='kvm'>
	I0414 14:29:31.720330 1213155 main.go:141] libmachine: (ha-290859-m02)   <name>ha-290859-m02</name>
	I0414 14:29:31.720338 1213155 main.go:141] libmachine: (ha-290859-m02)   <memory unit='MiB'>2200</memory>
	I0414 14:29:31.720346 1213155 main.go:141] libmachine: (ha-290859-m02)   <vcpu>2</vcpu>
	I0414 14:29:31.720352 1213155 main.go:141] libmachine: (ha-290859-m02)   <features>
	I0414 14:29:31.720359 1213155 main.go:141] libmachine: (ha-290859-m02)     <acpi/>
	I0414 14:29:31.720364 1213155 main.go:141] libmachine: (ha-290859-m02)     <apic/>
	I0414 14:29:31.720371 1213155 main.go:141] libmachine: (ha-290859-m02)     <pae/>
	I0414 14:29:31.720381 1213155 main.go:141] libmachine: (ha-290859-m02)     
	I0414 14:29:31.720411 1213155 main.go:141] libmachine: (ha-290859-m02)   </features>
	I0414 14:29:31.720433 1213155 main.go:141] libmachine: (ha-290859-m02)   <cpu mode='host-passthrough'>
	I0414 14:29:31.720452 1213155 main.go:141] libmachine: (ha-290859-m02)   
	I0414 14:29:31.720461 1213155 main.go:141] libmachine: (ha-290859-m02)   </cpu>
	I0414 14:29:31.720488 1213155 main.go:141] libmachine: (ha-290859-m02)   <os>
	I0414 14:29:31.720507 1213155 main.go:141] libmachine: (ha-290859-m02)     <type>hvm</type>
	I0414 14:29:31.720537 1213155 main.go:141] libmachine: (ha-290859-m02)     <boot dev='cdrom'/>
	I0414 14:29:31.720559 1213155 main.go:141] libmachine: (ha-290859-m02)     <boot dev='hd'/>
	I0414 14:29:31.720572 1213155 main.go:141] libmachine: (ha-290859-m02)     <bootmenu enable='no'/>
	I0414 14:29:31.720587 1213155 main.go:141] libmachine: (ha-290859-m02)   </os>
	I0414 14:29:31.720597 1213155 main.go:141] libmachine: (ha-290859-m02)   <devices>
	I0414 14:29:31.720609 1213155 main.go:141] libmachine: (ha-290859-m02)     <disk type='file' device='cdrom'>
	I0414 14:29:31.720626 1213155 main.go:141] libmachine: (ha-290859-m02)       <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/boot2docker.iso'/>
	I0414 14:29:31.720637 1213155 main.go:141] libmachine: (ha-290859-m02)       <target dev='hdc' bus='scsi'/>
	I0414 14:29:31.720649 1213155 main.go:141] libmachine: (ha-290859-m02)       <readonly/>
	I0414 14:29:31.720659 1213155 main.go:141] libmachine: (ha-290859-m02)     </disk>
	I0414 14:29:31.720668 1213155 main.go:141] libmachine: (ha-290859-m02)     <disk type='file' device='disk'>
	I0414 14:29:31.720684 1213155 main.go:141] libmachine: (ha-290859-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 14:29:31.720699 1213155 main.go:141] libmachine: (ha-290859-m02)       <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/ha-290859-m02.rawdisk'/>
	I0414 14:29:31.720732 1213155 main.go:141] libmachine: (ha-290859-m02)       <target dev='hda' bus='virtio'/>
	I0414 14:29:31.720746 1213155 main.go:141] libmachine: (ha-290859-m02)     </disk>
	I0414 14:29:31.720756 1213155 main.go:141] libmachine: (ha-290859-m02)     <interface type='network'>
	I0414 14:29:31.720768 1213155 main.go:141] libmachine: (ha-290859-m02)       <source network='mk-ha-290859'/>
	I0414 14:29:31.720777 1213155 main.go:141] libmachine: (ha-290859-m02)       <model type='virtio'/>
	I0414 14:29:31.720788 1213155 main.go:141] libmachine: (ha-290859-m02)     </interface>
	I0414 14:29:31.720799 1213155 main.go:141] libmachine: (ha-290859-m02)     <interface type='network'>
	I0414 14:29:31.720809 1213155 main.go:141] libmachine: (ha-290859-m02)       <source network='default'/>
	I0414 14:29:31.720821 1213155 main.go:141] libmachine: (ha-290859-m02)       <model type='virtio'/>
	I0414 14:29:31.720835 1213155 main.go:141] libmachine: (ha-290859-m02)     </interface>
	I0414 14:29:31.720844 1213155 main.go:141] libmachine: (ha-290859-m02)     <serial type='pty'>
	I0414 14:29:31.720855 1213155 main.go:141] libmachine: (ha-290859-m02)       <target port='0'/>
	I0414 14:29:31.720865 1213155 main.go:141] libmachine: (ha-290859-m02)     </serial>
	I0414 14:29:31.720875 1213155 main.go:141] libmachine: (ha-290859-m02)     <console type='pty'>
	I0414 14:29:31.720886 1213155 main.go:141] libmachine: (ha-290859-m02)       <target type='serial' port='0'/>
	I0414 14:29:31.720896 1213155 main.go:141] libmachine: (ha-290859-m02)     </console>
	I0414 14:29:31.720909 1213155 main.go:141] libmachine: (ha-290859-m02)     <rng model='virtio'>
	I0414 14:29:31.720943 1213155 main.go:141] libmachine: (ha-290859-m02)       <backend model='random'>/dev/random</backend>
	I0414 14:29:31.720956 1213155 main.go:141] libmachine: (ha-290859-m02)     </rng>
	I0414 14:29:31.720962 1213155 main.go:141] libmachine: (ha-290859-m02)     
	I0414 14:29:31.720972 1213155 main.go:141] libmachine: (ha-290859-m02)     
	I0414 14:29:31.720978 1213155 main.go:141] libmachine: (ha-290859-m02)   </devices>
	I0414 14:29:31.720993 1213155 main.go:141] libmachine: (ha-290859-m02) </domain>
	I0414 14:29:31.721002 1213155 main.go:141] libmachine: (ha-290859-m02) 
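The XML above is the complete libvirt domain definition for the m02 VM: 2 vCPUs, 2200 MiB of RAM, the boot2docker ISO as a CD-ROM boot device, the raw disk created earlier, and two virtio NICs (one on the private mk-ha-290859 network, one on libvirt's default network). As an illustrative sketch only, not minikube's actual driver code (which lives inside the docker-machine-driver-kvm2 plugin), defining and booting such a domain via the libvirt Go bindings looks roughly like:

	// Illustrative sketch: define and start a domain from XML using the
	// libvirt Go bindings (libvirt.org/go/libvirt). Mirrors the "define
	// libvirt domain using xml" / "creating domain..." steps in the log.
	package main
	
	import (
		"log"
	
		"libvirt.org/go/libvirt"
	)
	
	func main() {
		// KVMQemuURI from the cluster config logged earlier.
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()
	
		domainXML := `<domain type='kvm'>...</domain>` // the XML printed above (elided here)
	
		// "define libvirt domain using xml": registers a persistent domain.
		dom, err := conn.DomainDefineXML(domainXML)
		if err != nil {
			log.Fatal(err)
		}
		defer dom.Free()
	
		// "starting domain...": Create() boots a defined domain.
		if err := dom.Create(); err != nil {
			log.Fatal(err)
		}
	}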
	I0414 14:29:31.727524 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:76:01:7d in network default
	I0414 14:29:31.728172 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:31.728187 1213155 main.go:141] libmachine: (ha-290859-m02) starting domain...
	I0414 14:29:31.728195 1213155 main.go:141] libmachine: (ha-290859-m02) ensuring networks are active...
	I0414 14:29:31.728896 1213155 main.go:141] libmachine: (ha-290859-m02) Ensuring network default is active
	I0414 14:29:31.729170 1213155 main.go:141] libmachine: (ha-290859-m02) Ensuring network mk-ha-290859 is active
	I0414 14:29:31.729521 1213155 main.go:141] libmachine: (ha-290859-m02) getting domain XML...
	I0414 14:29:31.730489 1213155 main.go:141] libmachine: (ha-290859-m02) creating domain...
	I0414 14:29:32.993969 1213155 main.go:141] libmachine: (ha-290859-m02) waiting for IP...
	I0414 14:29:32.996009 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:32.996441 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:32.996505 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:32.996448 1213531 retry.go:31] will retry after 202.522594ms: waiting for domain to come up
	I0414 14:29:33.201175 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:33.201705 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:33.201751 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:33.201682 1213531 retry.go:31] will retry after 346.96007ms: waiting for domain to come up
	I0414 14:29:33.550485 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:33.550900 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:33.550931 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:33.550863 1213531 retry.go:31] will retry after 407.207189ms: waiting for domain to come up
	I0414 14:29:33.959550 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:33.960116 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:33.960149 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:33.960094 1213531 retry.go:31] will retry after 434.401549ms: waiting for domain to come up
	I0414 14:29:34.395749 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:34.396217 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:34.396267 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:34.396208 1213531 retry.go:31] will retry after 552.547121ms: waiting for domain to come up
	I0414 14:29:34.949860 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:34.950310 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:34.950344 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:34.950269 1213531 retry.go:31] will retry after 848.939274ms: waiting for domain to come up
	I0414 14:29:35.800706 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:35.801275 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:35.801301 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:35.801229 1213531 retry.go:31] will retry after 1.078619357s: waiting for domain to come up
	I0414 14:29:36.881700 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:36.882163 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:36.882187 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:36.882128 1213531 retry.go:31] will retry after 1.079210669s: waiting for domain to come up
	I0414 14:29:37.963455 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:37.963935 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:37.963969 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:37.963899 1213531 retry.go:31] will retry after 1.194058186s: waiting for domain to come up
	I0414 14:29:39.160481 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:39.160993 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:39.161031 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:39.160949 1213531 retry.go:31] will retry after 1.513626688s: waiting for domain to come up
	I0414 14:29:40.676551 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:40.677038 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:40.677071 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:40.677004 1213531 retry.go:31] will retry after 1.924347004s: waiting for domain to come up
	I0414 14:29:42.603644 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:42.604168 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:42.604192 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:42.604145 1213531 retry.go:31] will retry after 2.797639018s: waiting for domain to come up
	I0414 14:29:45.405004 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:45.405658 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:45.405688 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:45.405627 1213531 retry.go:31] will retry after 2.864814671s: waiting for domain to come up
	I0414 14:29:48.274060 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:48.274518 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:48.274591 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:48.274508 1213531 retry.go:31] will retry after 4.611052523s: waiting for domain to come up
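The escalating delays above (202ms, 346ms, ... 4.6s) are a jittered backoff loop polling the network's DHCP leases until the new domain reports an address. A minimal Go sketch of that pattern, with hypothetical names rather than minikube's actual retry package:

	// Illustrative sketch of the "retry.go:31] will retry after ..." loop:
	// poll with growing, jittered delays until a lookup succeeds or times out.
	package main
	
	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)
	
	// waitForIP polls lookup until it yields an address or maxWait elapses,
	// sleeping a little longer (plus jitter) after each failed attempt.
	func waitForIP(lookup func() (string, error), maxWait time.Duration) (string, error) {
		delay := 200 * time.Millisecond
		deadline := time.Now().Add(maxWait)
		for time.Now().Before(deadline) {
			if ip, err := lookup(); err == nil {
				return ip, nil
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %v: waiting for domain to come up\n", wait)
			time.Sleep(wait)
			delay = delay * 3 / 2 // grow the base delay, as in the log's progression
		}
		return "", errors.New("timed out waiting for domain IP")
	}
	
	func main() {
		attempts := 0
		ip, err := waitForIP(func() (string, error) {
			attempts++
			if attempts < 4 { // simulate a few empty DHCP-lease lookups
				return "", errors.New("no DHCP lease yet")
			}
			return "192.168.39.111", nil
		}, 30*time.Second)
		fmt.Println(ip, err)
	}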
	I0414 14:29:52.886693 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:52.887068 1213155 main.go:141] libmachine: (ha-290859-m02) found domain IP: 192.168.39.111
	I0414 14:29:52.887093 1213155 main.go:141] libmachine: (ha-290859-m02) reserving static IP address...
	I0414 14:29:52.887105 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has current primary IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:52.887506 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find host DHCP lease matching {name: "ha-290859-m02", mac: "52:54:00:f0:fd:94", ip: "192.168.39.111"} in network mk-ha-290859
	I0414 14:29:52.966052 1213155 main.go:141] libmachine: (ha-290859-m02) reserved static IP address 192.168.39.111 for domain ha-290859-m02
	I0414 14:29:52.966083 1213155 main.go:141] libmachine: (ha-290859-m02) waiting for SSH...
	I0414 14:29:52.966091 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Getting to WaitForSSH function...
	I0414 14:29:52.968665 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:52.969034 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:52.969082 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:52.969208 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Using SSH client type: external
	I0414 14:29:52.969231 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa (-rw-------)
	I0414 14:29:52.969263 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:29:52.969282 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | About to run SSH command:
	I0414 14:29:52.969295 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | exit 0
	I0414 14:29:53.095336 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | SSH cmd err, output: <nil>: 
	I0414 14:29:53.095545 1213155 main.go:141] libmachine: (ha-290859-m02) KVM machine creation complete
	I0414 14:29:53.095910 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
	I0414 14:29:53.096462 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:53.096622 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:53.096806 1213155 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 14:29:53.096820 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetState
	I0414 14:29:53.098070 1213155 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 14:29:53.098085 1213155 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 14:29:53.098090 1213155 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 14:29:53.098095 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.100244 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.100649 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.100680 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.100852 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.101066 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.101236 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.101372 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.101519 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:53.101769 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:53.101782 1213155 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 14:29:53.206593 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:29:53.206617 1213155 main.go:141] libmachine: Detecting the provisioner...
	I0414 14:29:53.206628 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.209588 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.209969 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.209988 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.210187 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.210382 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.210544 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.210717 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.210971 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:53.211192 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:53.211205 1213155 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 14:29:53.315888 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 14:29:53.315980 1213155 main.go:141] libmachine: found compatible host: buildroot
	I0414 14:29:53.315990 1213155 main.go:141] libmachine: Provisioning with buildroot...
	I0414 14:29:53.316001 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:29:53.316277 1213155 buildroot.go:166] provisioning hostname "ha-290859-m02"
	I0414 14:29:53.316306 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:29:53.316451 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.319393 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.319803 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.319837 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.319946 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.320140 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.320321 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.320450 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.320602 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:53.320806 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:53.320818 1213155 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-290859-m02 && echo "ha-290859-m02" | sudo tee /etc/hostname
	I0414 14:29:53.442594 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-290859-m02
	
	I0414 14:29:53.442629 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.445561 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.445918 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.445944 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.446150 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.446351 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.446528 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.446678 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.446833 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:53.447038 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:53.447053 1213155 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-290859-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-290859-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-290859-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:29:53.559946 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
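	The shell script above is how the provisioner keeps /etc/hosts idempotent: it only rewrites or appends the 127.0.1.1 entry when no line already maps the new hostname. A minimal Go sketch of the same check-then-rewrite logic (ensureHostname is a hypothetical helper, not minikube's actual code):

	package main

	import (
		"fmt"
		"os"
		"regexp"
		"strings"
	)

	// ensureHostname mirrors the shell above: if no line already ends with
	// the hostname, rewrite an existing "127.0.1.1 ..." entry or append one.
	func ensureHostname(hostsPath, name string) error {
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			return err
		}
		if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).Match(data) {
			return nil // already present, nothing to do
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		var out string
		if loopback.Match(data) {
			out = loopback.ReplaceAllString(string(data), "127.0.1.1 "+name)
		} else {
			out = strings.TrimRight(string(data), "\n") + "\n127.0.1.1 " + name + "\n"
		}
		return os.WriteFile(hostsPath, []byte(out), 0644)
	}

	func main() {
		if err := ensureHostname("/etc/hosts", "ha-290859-m02"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}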
	I0414 14:29:53.559988 1213155 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1196368/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1196368/.minikube}
	I0414 14:29:53.560014 1213155 buildroot.go:174] setting up certificates
	I0414 14:29:53.560031 1213155 provision.go:84] configureAuth start
	I0414 14:29:53.560046 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:29:53.560377 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:29:53.562822 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.563207 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.563237 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.563574 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.566107 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.566478 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.566505 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.566628 1213155 provision.go:143] copyHostCerts
	I0414 14:29:53.566676 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:29:53.566716 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem, removing ...
	I0414 14:29:53.566730 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:29:53.566839 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem (1082 bytes)
	I0414 14:29:53.566954 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:29:53.566979 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem, removing ...
	I0414 14:29:53.566987 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:29:53.567026 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem (1123 bytes)
	I0414 14:29:53.567106 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:29:53.567130 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem, removing ...
	I0414 14:29:53.567137 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:29:53.567173 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem (1675 bytes)
	I0414 14:29:53.567293 1213155 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem org=jenkins.ha-290859-m02 san=[127.0.0.1 192.168.39.111 ha-290859-m02 localhost minikube]
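	The line above is the interesting part of configureAuth: a server certificate is signed by the minikube CA with SANs covering the node's IP, hostname, and loopback names, so the same cert works for local and remote Docker/API access. A hedged crypto/x509 sketch of equivalent signing follows; as an assumption it generates a throwaway CA inline, whereas the real code loads the existing ca.pem/ca-key.pem:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA standing in for ca.pem/ca-key.pem (sketch only).
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTpl, caTpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server cert with the SANs and org logged above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-290859-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-290859-m02", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.111")},
		}
		der, _ := x509.CreateCertificate(rand.Reader, srvTpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}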
	I0414 14:29:53.976110 1213155 provision.go:177] copyRemoteCerts
	I0414 14:29:53.976184 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:29:53.976219 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.978798 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.979170 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.979202 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.979355 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.979571 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.979771 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.979950 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:29:54.060926 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0414 14:29:54.061020 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0414 14:29:54.083723 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0414 14:29:54.083818 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 14:29:54.106702 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0414 14:29:54.106773 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 14:29:54.128136 1213155 provision.go:87] duration metric: took 568.088664ms to configureAuth
	I0414 14:29:54.128177 1213155 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:29:54.128372 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:54.128400 1213155 main.go:141] libmachine: Checking connection to Docker...
	I0414 14:29:54.128413 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetURL
	I0414 14:29:54.129571 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | using libvirt version 6000000
	I0414 14:29:54.131690 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.132071 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.132095 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.132296 1213155 main.go:141] libmachine: Docker is up and running!
	I0414 14:29:54.132311 1213155 main.go:141] libmachine: Reticulating splines...
	I0414 14:29:54.132318 1213155 client.go:171] duration metric: took 23.368636066s to LocalClient.Create
	I0414 14:29:54.132344 1213155 start.go:167] duration metric: took 23.368708618s to libmachine.API.Create "ha-290859"
	I0414 14:29:54.132356 1213155 start.go:293] postStartSetup for "ha-290859-m02" (driver="kvm2")
	I0414 14:29:54.132370 1213155 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:29:54.132394 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.132652 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:29:54.132681 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:54.134726 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.135119 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.135146 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.135312 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:54.135512 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.135648 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:54.135782 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:29:54.217134 1213155 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:29:54.221237 1213155 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:29:54.221265 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/addons for local assets ...
	I0414 14:29:54.221324 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/files for local assets ...
	I0414 14:29:54.221392 1213155 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> 12036392.pem in /etc/ssl/certs
	I0414 14:29:54.221401 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /etc/ssl/certs/12036392.pem
	I0414 14:29:54.221495 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:29:54.230111 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:29:54.253934 1213155 start.go:296] duration metric: took 121.560617ms for postStartSetup
	I0414 14:29:54.253995 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
	I0414 14:29:54.254683 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:29:54.257374 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.257778 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.257811 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.258118 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:29:54.258332 1213155 start.go:128] duration metric: took 23.513984018s to createHost
	I0414 14:29:54.258362 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:54.260873 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.261257 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.261285 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.261448 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:54.261638 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.261821 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.261984 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:54.262185 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:54.262369 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:54.262379 1213155 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:29:54.367727 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744640994.343893226
	
	I0414 14:29:54.367759 1213155 fix.go:216] guest clock: 1744640994.343893226
	I0414 14:29:54.367766 1213155 fix.go:229] Guest: 2025-04-14 14:29:54.343893226 +0000 UTC Remote: 2025-04-14 14:29:54.258346943 +0000 UTC m=+69.442509295 (delta=85.546283ms)
	I0414 14:29:54.367782 1213155 fix.go:200] guest clock delta is within tolerance: 85.546283ms
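	The clock check above runs `date +%s.%N` in the guest, compares it with the host's timestamp, and accepts the skew if it is within tolerance (here the delta is 85.5ms). A small Go sketch of that comparison; guestClockDelta and the 2s tolerance are assumptions for illustration, not minikube's exact values:

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	// guestClockDelta parses the "seconds.nanoseconds" string returned by
	// `date +%s.%N` and reports the skew against the given local time.
	func guestClockDelta(out string, local time.Time) (time.Duration, error) {
		sec, err := strconv.ParseFloat(out, 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(sec*float64(time.Second)))
		return local.Sub(guest), nil
	}

	func main() {
		const tolerance = 2 * time.Second // assumed threshold
		d, err := guestClockDelta("1744640994.343893226",
			time.Unix(0, 1744640994258346943))
		if err != nil {
			panic(err)
		}
		fmt.Printf("delta=%v within tolerance=%v: %v\n", d, tolerance, d.Abs() < tolerance)
	}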
	I0414 14:29:54.367788 1213155 start.go:83] releasing machines lock for "ha-290859-m02", held for 23.623550564s
	I0414 14:29:54.367807 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.368115 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:29:54.370975 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.371432 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.371462 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.373758 1213155 out.go:177] * Found network options:
	I0414 14:29:54.375127 1213155 out.go:177]   - NO_PROXY=192.168.39.110
	W0414 14:29:54.376278 1213155 proxy.go:119] fail to check proxy env: Error ip not in block
	I0414 14:29:54.376312 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.376913 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.377127 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.377268 1213155 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:29:54.377316 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	W0414 14:29:54.377370 1213155 proxy.go:119] fail to check proxy env: Error ip not in block
	I0414 14:29:54.377457 1213155 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0414 14:29:54.377481 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:54.380102 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.380374 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.380406 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.380429 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.380578 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:54.380741 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.380859 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.380897 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.380909 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:54.381045 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:29:54.381125 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:54.381305 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.381467 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:54.381614 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	W0414 14:29:54.458225 1213155 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:29:54.458308 1213155 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:29:54.490449 1213155 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 14:29:54.490475 1213155 start.go:495] detecting cgroup driver to use...
	I0414 14:29:54.490555 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0414 14:29:54.524660 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0414 14:29:54.537871 1213155 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:29:54.537936 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:29:54.549801 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:29:54.562203 1213155 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:29:54.666348 1213155 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:29:54.786710 1213155 docker.go:233] disabling docker service ...
	I0414 14:29:54.786789 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:29:54.800092 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:29:54.812105 1213155 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:29:54.936777 1213155 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:29:55.059002 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 14:29:55.072980 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:29:55.089970 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0414 14:29:55.099362 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0414 14:29:55.108681 1213155 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0414 14:29:55.108766 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0414 14:29:55.118203 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:29:55.127402 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0414 14:29:55.136483 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:29:55.145554 1213155 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:29:55.154769 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0414 14:29:55.163700 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0414 14:29:55.172612 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0414 14:29:55.181597 1213155 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:29:55.189962 1213155 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:29:55.190019 1213155 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:29:55.202112 1213155 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
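	The sequence above shows the netfilter fallback: the sysctl probe fails with status 255 because /proc/sys/net/bridge/bridge-nf-call-iptables does not exist until br_netfilter is loaded, so the provisioner runs modprobe and then enables IP forwarding. A Go sketch of the same check-then-modprobe flow (ensureBrNetfilter is a hypothetical helper; it needs root, like the sudo commands in the log):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// ensureBrNetfilter mirrors the fallback above: if the bridge-nf sysctl
	// file is missing, load the br_netfilter module, then enable forwarding.
	func ensureBrNetfilter() error {
		const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
		if _, err := os.Stat(key); os.IsNotExist(err) {
			if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
				return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
			}
		}
		return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
	}

	func main() {
		if err := ensureBrNetfilter(); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}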
	I0414 14:29:55.210883 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:29:55.319480 1213155 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0414 14:29:55.344914 1213155 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0414 14:29:55.345008 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:29:55.349081 1213155 retry.go:31] will retry after 1.00520308s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0414 14:29:56.354657 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
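	After restarting containerd, start.go waits up to 60s for /run/containerd/containerd.sock; the first stat fails (the socket is not up yet), the runner backs off about a second, and the second stat succeeds. A sketch of that poll loop in Go; waitForSocket is a hypothetical helper, and the extra dial check is an addition beyond the plain stat shown in the log:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	// waitForSocket polls until the containerd socket both exists and
	// accepts a connection, mirroring the stat-and-retry loop above.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				if c, err := net.DialTimeout("unix", path, time.Second); err == nil {
					c.Close()
					return nil
				}
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
	}

	func main() {
		if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}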
	I0414 14:29:56.359600 1213155 start.go:563] Will wait 60s for crictl version
	I0414 14:29:56.359685 1213155 ssh_runner.go:195] Run: which crictl
	I0414 14:29:56.363336 1213155 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:29:56.403201 1213155 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0414 14:29:56.403312 1213155 ssh_runner.go:195] Run: containerd --version
	I0414 14:29:56.430179 1213155 ssh_runner.go:195] Run: containerd --version
	I0414 14:29:56.454598 1213155 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.23 ...
	I0414 14:29:56.455785 1213155 out.go:177]   - env NO_PROXY=192.168.39.110
	I0414 14:29:56.456735 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:29:56.459280 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:56.459661 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:56.459691 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:56.459901 1213155 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 14:29:56.463673 1213155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:29:56.475057 1213155 mustload.go:65] Loading cluster: ha-290859
	I0414 14:29:56.475248 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:56.475557 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:56.475600 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:56.490597 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45247
	I0414 14:29:56.491136 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:56.491690 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:56.491711 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:56.492119 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:56.492309 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:56.493794 1213155 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:29:56.494134 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:56.494173 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:56.509360 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38381
	I0414 14:29:56.509774 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:56.510229 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:56.510256 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:56.510618 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:56.510840 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:56.511031 1213155 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859 for IP: 192.168.39.111
	I0414 14:29:56.511044 1213155 certs.go:194] generating shared ca certs ...
	I0414 14:29:56.511057 1213155 certs.go:226] acquiring lock for ca certs: {Name:mk7215406b4c41badf9eca6bf9f1036fd88f670e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:56.511177 1213155 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key
	I0414 14:29:56.511226 1213155 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key
	I0414 14:29:56.511236 1213155 certs.go:256] generating profile certs ...
	I0414 14:29:56.511347 1213155 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key
	I0414 14:29:56.511373 1213155 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e
	I0414 14:29:56.511386 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110 192.168.39.111 192.168.39.254]
	I0414 14:29:56.589532 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e ...
	I0414 14:29:56.589564 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e: {Name:mk9fb7b2adad4a62e9ebf1f50826b8647aaaa2d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:56.589727 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e ...
	I0414 14:29:56.589740 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e: {Name:mk7ad07038879568d4a23c2fb5c04f12405eb02f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:56.589811 1213155 certs.go:381] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt
	I0414 14:29:56.589948 1213155 certs.go:385] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key
	I0414 14:29:56.590096 1213155 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key
	I0414 14:29:56.590118 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0414 14:29:56.590137 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0414 14:29:56.590151 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0414 14:29:56.590162 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0414 14:29:56.590180 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0414 14:29:56.590198 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0414 14:29:56.590211 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0414 14:29:56.590220 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0414 14:29:56.590271 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem (1338 bytes)
	W0414 14:29:56.590298 1213155 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639_empty.pem, impossibly tiny 0 bytes
	I0414 14:29:56.590308 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 14:29:56.590327 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem (1082 bytes)
	I0414 14:29:56.590346 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:29:56.590368 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem (1675 bytes)
	I0414 14:29:56.590404 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:29:56.590430 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:56.590446 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem -> /usr/share/ca-certificates/1203639.pem
	I0414 14:29:56.590457 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /usr/share/ca-certificates/12036392.pem
	I0414 14:29:56.590494 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:56.593379 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:56.593755 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:56.593777 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:56.593996 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:56.594232 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:56.594405 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:56.594540 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:56.671687 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0414 14:29:56.677338 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0414 14:29:56.689003 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0414 14:29:56.693487 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0414 14:29:56.704430 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0414 14:29:56.708650 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0414 14:29:56.719039 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0414 14:29:56.723166 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0414 14:29:56.734152 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0414 14:29:56.738243 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0414 14:29:56.749081 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0414 14:29:56.753248 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0414 14:29:56.764073 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:29:56.788198 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:29:56.813073 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:29:56.835958 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:29:56.859645 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0414 14:29:56.882879 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 14:29:56.906187 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:29:56.928932 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 14:29:56.952365 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:29:56.974920 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem --> /usr/share/ca-certificates/1203639.pem (1338 bytes)
	I0414 14:29:56.998466 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /usr/share/ca-certificates/12036392.pem (1708 bytes)
	I0414 14:29:57.022704 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0414 14:29:57.038828 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0414 14:29:57.054237 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0414 14:29:57.069513 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0414 14:29:57.085532 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0414 14:29:57.101522 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0414 14:29:57.117372 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0414 14:29:57.132827 1213155 ssh_runner.go:195] Run: openssl version
	I0414 14:29:57.138331 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:29:57.148324 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:57.152469 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:57.152557 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:57.158279 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 14:29:57.169126 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1203639.pem && ln -fs /usr/share/ca-certificates/1203639.pem /etc/ssl/certs/1203639.pem"
	I0414 14:29:57.179995 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1203639.pem
	I0414 14:29:57.184265 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1203639.pem
	I0414 14:29:57.184340 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1203639.pem
	I0414 14:29:57.189810 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1203639.pem /etc/ssl/certs/51391683.0"
	I0414 14:29:57.199987 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12036392.pem && ln -fs /usr/share/ca-certificates/12036392.pem /etc/ssl/certs/12036392.pem"
	I0414 14:29:57.210177 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12036392.pem
	I0414 14:29:57.214740 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/12036392.pem
	I0414 14:29:57.214815 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12036392.pem
	I0414 14:29:57.221853 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12036392.pem /etc/ssl/certs/3ec20f2e.0"
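	The three loops above install each CA bundle under /usr/share/ca-certificates and symlink it as /etc/ssl/certs/<hash>.0, where the hash comes from `openssl x509 -hash` (OpenSSL's subject-hash scheme, which TLS stacks use to look up trust anchors). A quick Go sanity check that a bundle parses, shown below; it deliberately does not reproduce the subject-hash computation:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	// Parse the first PEM block of a CA bundle and print its subject.
	func main() {
		data, err := os.ReadFile("/usr/share/ca-certificates/minikubeCA.pem")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("subject:", cert.Subject)
	}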
	I0414 14:29:57.232248 1213155 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:29:57.236270 1213155 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 14:29:57.236327 1213155 kubeadm.go:934] updating node {m02 192.168.39.111 8443 v1.32.2 containerd true true} ...
	I0414 14:29:57.236439 1213155 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-290859-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 14:29:57.236473 1213155 kube-vip.go:115] generating kube-vip config ...
	I0414 14:29:57.236525 1213155 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0414 14:29:57.252239 1213155 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0414 14:29:57.252336 1213155 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.10
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
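	The manifest above is the HA glue: kube-vip runs as a static pod on every control plane, uses ARP with leader election (vip_leaderelection/plndr-cp-lock) so exactly one node answers for the VIP 192.168.39.254, and with lb_enable also balances API-server traffic across control planes on port 8443. Only a few values vary per cluster; a hedged stand-in for the generation step, templating just those values (the real kube-vip.go template covers the whole manifest):

	package main

	import (
		"os"
		"text/template"
	)

	// A trimmed stand-in for the kube-vip manifest generation: only the
	// per-cluster values (VIP and API port) are templated here.
	const fragment = `- name: port
	  value: "{{.Port}}"
	- name: address
	  value: {{.VIP}}
	`

	func main() {
		t := template.Must(template.New("kube-vip").Parse(fragment))
		_ = t.Execute(os.Stdout, struct {
			VIP  string
			Port int
		}{VIP: "192.168.39.254", Port: 8443})
	}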
	I0414 14:29:57.252412 1213155 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 14:29:57.262218 1213155 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.2': No such file or directory
	
	Initiating transfer...
	I0414 14:29:57.262295 1213155 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.2
	I0414 14:29:57.271580 1213155 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
	I0414 14:29:57.271599 1213155 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubeadm
	I0414 14:29:57.271617 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubectl -> /var/lib/minikube/binaries/v1.32.2/kubectl
	I0414 14:29:57.271622 1213155 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubelet
	I0414 14:29:57.271681 1213155 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl
	I0414 14:29:57.275804 1213155 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubectl': No such file or directory
	I0414 14:29:57.275835 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubectl --> /var/lib/minikube/binaries/v1.32.2/kubectl (57323672 bytes)
	I0414 14:29:58.408400 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:29:58.423781 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubelet -> /var/lib/minikube/binaries/v1.32.2/kubelet
	I0414 14:29:58.423898 1213155 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet
	I0414 14:29:58.428378 1213155 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubelet': No such file or directory
	I0414 14:29:58.428415 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubelet --> /var/lib/minikube/binaries/v1.32.2/kubelet (77406468 bytes)
	I0414 14:29:58.749359 1213155 out.go:201] 
	W0414 14:29:58.750775 1213155 out.go:270] X Exiting due to GUEST_START: failed to start node: adding node: update node: downloading binaries: downloading kubeadm: download failed: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256 Dst:/home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubeadm.download Pwd: Mode:2 Umask:---------- Detectors:[0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0] Decompressors:map[bz2:0xc0004c8690 gz:0xc0004c8698 tar:0xc0004c8610 tar.bz2:0xc0004c8620 tar.gz:0xc0004c8630 tar.xz:0xc0004c8650 tar.zst:0xc0004c8660 tbz2:0xc0004c8620 tgz:0xc0004c8630 txz:0xc0004c8650 tzst:0xc0004c8660 xz:0xc0004c8700 zip:0xc0004c8720 zst:0xc0004c8708] Getters:map[file:0xc00216a250 http:0xc00012c550 https:0xc00012c5a0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: read tcp 10.154.0.3:60586->151.101.193.55:443: read: connection reset by peer
	W0414 14:29:58.750801 1213155 out.go:270] * 
	W0414 14:29:58.751639 1213155 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 14:29:58.753070 1213155 out.go:201] 
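	The root cause of the failure is in the GUEST_START error above: while caching kubeadm for the m02 node, the TLS download from dl.k8s.io died mid-transfer ("read: connection reset by peer"), so the node join aborted. kubectl and kubelet had already transferred; the cluster state shown below belongs to the healthy primary node. A hedged Go sketch of download-with-checksum-and-retry, the kind of recovery the getter attempts; fetchVerified is a hypothetical helper and <expected-sha256> is a placeholder for the value published at the .sha256 URL:

	package main

	import (
		"crypto/sha256"
		"fmt"
		"io"
		"net/http"
		"os"
		"time"
	)

	// fetchVerified downloads url with simple backoff-retry and checks the
	// body against a hex sha256 (as published at <url>.sha256).
	func fetchVerified(url, wantHex, dst string, attempts int) error {
		var lastErr error
		for i := 0; i < attempts; i++ {
			if lastErr = try(url, wantHex, dst); lastErr == nil {
				return nil
			}
			time.Sleep(time.Duration(i+1) * 2 * time.Second)
		}
		return lastErr
	}

	func try(url, wantHex, dst string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		f, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer f.Close()
		h := sha256.New()
		if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
			return err // e.g. "connection reset by peer", as in the log
		}
		if got := fmt.Sprintf("%x", h.Sum(nil)); got != wantHex {
			return fmt.Errorf("checksum mismatch: got %s want %s", got, wantHex)
		}
		return nil
	}

	func main() {
		url := "https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm"
		if err := fetchVerified(url, "<expected-sha256>", "kubeadm", 3); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}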
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	731a9f2fe8645       c69fa2e9cbf5f       14 seconds ago      Running             coredns                   0                   e56d2e4c87eea       coredns-668d6bf9bc-qnl6q
	0ec0a3a234c7c       c69fa2e9cbf5f       14 seconds ago      Running             coredns                   0                   2818c413e6e32       coredns-668d6bf9bc-wbn4p
	922f97d06563e       6e38f40d628db       14 seconds ago      Running             storage-provisioner       0                   4de376d34ee7f       storage-provisioner
	2df8ccb8d6ed9       df3849d954c98       26 seconds ago      Running             kindnet-cni               0                   08244cfc780bd       kindnet-hm99t
	e22a81661302f       f1332858868e1       29 seconds ago      Running             kube-proxy                0                   f20a0bcfbd507       kube-proxy-cg945
	9914f8879fc43       6ff023a402a69       37 seconds ago      Running             kube-vip                  0                   7b4e857fc4a72       kube-vip-ha-290859
	8263b35014337       b6a454c5a800d       40 seconds ago      Running             kube-controller-manager   0                   96ffccfabb2f0       kube-controller-manager-ha-290859
	3607093f95b04       85b7a174738ba       40 seconds ago      Running             kube-apiserver            0                   7d06c53c8318a       kube-apiserver-ha-290859
	b9d0c94204534       a9e7e6b294baf       40 seconds ago      Running             etcd                      0                   07c98c2ded11c       etcd-ha-290859
	341626ffff967       d8e673e7c9983       40 seconds ago      Running             kube-scheduler            0                   d86edf81d4f34       kube-scheduler-ha-290859
	
	
	==> containerd <==
	Apr 14 14:29:44 ha-290859 containerd[643]: time="2025-04-14T14:29:44.944257172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 14 14:29:44 ha-290859 containerd[643]: time="2025-04-14T14:29:44.944335026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 14 14:29:44 ha-290859 containerd[643]: time="2025-04-14T14:29:44.991327229Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 14 14:29:44 ha-290859 containerd[643]: time="2025-04-14T14:29:44.991399429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 14 14:29:44 ha-290859 containerd[643]: time="2025-04-14T14:29:44.991414789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 14 14:29:44 ha-290859 containerd[643]: time="2025-04-14T14:29:44.991553876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.006971699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.007117025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.007134486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.015183713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.035724100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:a98bb55f-5a73-4436-82eb-ae7534928039,Namespace:kube-system,Attempt:0,} returns sandbox id \"4de376d34ee7f88a6fa395d518e7950ac2b1691d3e1668d0d79130d65133045f\""
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.049022093Z" level=info msg="CreateContainer within sandbox \"4de376d34ee7f88a6fa395d518e7950ac2b1691d3e1668d0d79130d65133045f\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.082712088Z" level=info msg="CreateContainer within sandbox \"4de376d34ee7f88a6fa395d518e7950ac2b1691d3e1668d0d79130d65133045f\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"922f97d06563e10c12ce83edd45e4f1aa0b78449dcdb50b413a7f4fc80cc346b\""
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.083397395Z" level=info msg="StartContainer for \"922f97d06563e10c12ce83edd45e4f1aa0b78449dcdb50b413a7f4fc80cc346b\""
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.120635029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wbn4p,Uid:5c2a6c8d-60f5-466d-8f59-f43a26cf06c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"2818c413e6e32cda88d124ae36bfe42091bf5832b899e50c953444aea7c8118e\""
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.125584339Z" level=info msg="CreateContainer within sandbox \"2818c413e6e32cda88d124ae36bfe42091bf5832b899e50c953444aea7c8118e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.165622128Z" level=info msg="CreateContainer within sandbox \"2818c413e6e32cda88d124ae36bfe42091bf5832b899e50c953444aea7c8118e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0ec0a3a234c7c9ab89ca83a237362a229e9c5f0e94fdbf641b886cf994e1cd2f\""
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.168944603Z" level=info msg="StartContainer for \"0ec0a3a234c7c9ab89ca83a237362a229e9c5f0e94fdbf641b886cf994e1cd2f\""
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.181036869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qnl6q,Uid:a590080d-c4b1-4697-9849-ae6130e483a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"e56d2e4c87eea2d527e5c301e33c596e4ec4533b17e49248e3c35eeb66f90f11\""
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.186359489Z" level=info msg="CreateContainer within sandbox \"e56d2e4c87eea2d527e5c301e33c596e4ec4533b17e49248e3c35eeb66f90f11\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.209760426Z" level=info msg="CreateContainer within sandbox \"e56d2e4c87eea2d527e5c301e33c596e4ec4533b17e49248e3c35eeb66f90f11\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0\""
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.212826022Z" level=info msg="StartContainer for \"922f97d06563e10c12ce83edd45e4f1aa0b78449dcdb50b413a7f4fc80cc346b\" returns successfully"
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.215681811Z" level=info msg="StartContainer for \"731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0\""
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.285830032Z" level=info msg="StartContainer for \"0ec0a3a234c7c9ab89ca83a237362a229e9c5f0e94fdbf641b886cf994e1cd2f\" returns successfully"
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.294639585Z" level=info msg="StartContainer for \"731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0\" returns successfully"
	
	
	==> coredns [0ec0a3a234c7c9ab89ca83a237362a229e9c5f0e94fdbf641b886cf994e1cd2f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:46089 - 56153 "HINFO IN 6072608555509463616.6529762715821029691. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009374887s
	
	
	==> coredns [731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:50026 - 40228 "HINFO IN 6089878548460793106.7503956428927620962. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010088983s
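The random-name HINFO lookups in both CoreDNS logs above are CoreDNS's standard startup self-check; the NXDOMAIN answers are expected and not a failure signal. A quick manual check of cluster DNS would be (a sketch, not part of the test run; the busybox:1.28 image and "dnstest" pod name are assumptions for illustration):

	kubectl --context ha-290859 run -it --rm dnstest --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default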
	
	
	==> describe nodes <==
	Name:               ha-290859
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-290859
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ed8f1f01b35eff2786f40199152a1775806f2de2
	                    minikube.k8s.io/name=ha-290859
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_14T14_29_26_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Apr 2025 14:29:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-290859
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Apr 2025 14:29:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Apr 2025 14:29:56 +0000   Mon, 14 Apr 2025 14:29:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Apr 2025 14:29:56 +0000   Mon, 14 Apr 2025 14:29:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Apr 2025 14:29:56 +0000   Mon, 14 Apr 2025 14:29:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Apr 2025 14:29:56 +0000   Mon, 14 Apr 2025 14:29:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.110
	  Hostname:    ha-290859
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0538f5775f954b3bbf6bc94e8eb6c49a
	  System UUID:                0538f577-5f95-4b3b-bf6b-c94e8eb6c49a
	  Boot ID:                    357ae105-a7f9-47b1-bf31-1c1aadedfe92
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-qnl6q             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     30s
	  kube-system                 coredns-668d6bf9bc-wbn4p             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     30s
	  kube-system                 etcd-ha-290859                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         34s
	  kube-system                 kindnet-hm99t                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      30s
	  kube-system                 kube-apiserver-ha-290859             250m (12%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-ha-290859    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-cg945                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-ha-290859             100m (5%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-vip-ha-290859                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 29s   kube-proxy       
	  Normal  Starting                 34s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  34s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  34s   kubelet          Node ha-290859 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s   kubelet          Node ha-290859 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s   kubelet          Node ha-290859 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           31s   node-controller  Node ha-290859 event: Registered Node ha-290859 in Controller
	  Normal  NodeReady                15s   kubelet          Node ha-290859 status is now: NodeReady
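For reference, the node description above can be regenerated by hand against the same profile (a sketch; assumes the integration run's kubeconfig, as listed in the environment above, is still in place):

	kubectl --context ha-290859 describe node ha-290859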
	
	
	==> dmesg <==
	[Apr14 14:28] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051284] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038065] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.815736] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.968563] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.543371] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr14 14:29] systemd-fstab-generator[505]: Ignoring "noauto" option for root device
	[  +0.058894] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059786] systemd-fstab-generator[518]: Ignoring "noauto" option for root device
	[  +0.183634] systemd-fstab-generator[532]: Ignoring "noauto" option for root device
	[  +0.109211] systemd-fstab-generator[544]: Ignoring "noauto" option for root device
	[  +0.261328] systemd-fstab-generator[574]: Ignoring "noauto" option for root device
	[  +4.868852] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.061817] kauditd_printk_skb: 158 callbacks suppressed
	[  +0.541337] systemd-fstab-generator[688]: Ignoring "noauto" option for root device
	[  +4.433977] systemd-fstab-generator[826]: Ignoring "noauto" option for root device
	[  +0.054755] kauditd_printk_skb: 46 callbacks suppressed
	[  +7.040196] systemd-fstab-generator[1293]: Ignoring "noauto" option for root device
	[  +0.092655] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.133260] kauditd_printk_skb: 36 callbacks suppressed
	[ +14.332004] kauditd_printk_skb: 23 callbacks suppressed
	
	
	==> etcd [b9d0c942045346e617420beacf1ee53ebaa73b72295bfad233845fe524f8b15c] <==
	{"level":"info","ts":"2025-04-14T14:29:20.934693Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-04-14T14:29:20.934727Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 received MsgPreVoteResp from fbb007bab925a598 at term 1"}
	{"level":"info","ts":"2025-04-14T14:29:20.934744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 became candidate at term 2"}
	{"level":"info","ts":"2025-04-14T14:29:20.934754Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 received MsgVoteResp from fbb007bab925a598 at term 2"}
	{"level":"info","ts":"2025-04-14T14:29:20.934880Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 became leader at term 2"}
	{"level":"info","ts":"2025-04-14T14:29:20.934897Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fbb007bab925a598 elected leader fbb007bab925a598 at term 2"}
	{"level":"info","ts":"2025-04-14T14:29:20.938840Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"fbb007bab925a598","local-member-attributes":"{Name:ha-290859 ClientURLs:[https://192.168.39.110:2379]}","request-path":"/0/members/fbb007bab925a598/attributes","cluster-id":"a3dbfa6decfc8853","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-14T14:29:20.938875Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T14:29:20.939017Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:29:20.939433Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T14:29:20.940639Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a3dbfa6decfc8853","local-member-id":"fbb007bab925a598","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:29:20.940850Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:29:20.940910Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:29:20.941291Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T14:29:20.941327Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T14:29:20.942134Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-14T14:29:20.942264Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.110:2379"}
	{"level":"info","ts":"2025-04-14T14:29:20.943625Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-14T14:29:20.943655Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2025-04-14T14:29:27.104552Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.197172ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/node-controller\" limit:1 ","response":"range_response_count:1 size:195"}
	{"level":"info","ts":"2025-04-14T14:29:27.104712Z","caller":"traceutil/trace.go:171","msg":"trace[2014118741] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/node-controller; range_end:; response_count:1; response_revision:283; }","duration":"161.489617ms","start":"2025-04-14T14:29:26.943197Z","end":"2025-04-14T14:29:27.104687Z","steps":["trace[2014118741] 'range keys from in-memory index tree'  (duration: 161.141805ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:29:27.105569Z","caller":"traceutil/trace.go:171","msg":"trace[1003808847] transaction","detail":"{read_only:false; response_revision:284; number_of_response:1; }","duration":"157.128151ms","start":"2025-04-14T14:29:26.948431Z","end":"2025-04-14T14:29:27.105559Z","steps":["trace[1003808847] 'process raft request'  (duration: 84.378612ms)","trace[1003808847] 'compare'  (duration: 71.52798ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-14T14:29:27.104865Z","caller":"traceutil/trace.go:171","msg":"trace[43329066] linearizableReadLoop","detail":"{readStateIndex:297; appliedIndex:296; }","duration":"119.436827ms","start":"2025-04-14T14:29:26.985404Z","end":"2025-04-14T14:29:27.104841Z","steps":["trace[43329066] 'read index received'  (duration: 47.335931ms)","trace[43329066] 'applied index is now lower than readState.Index'  (duration: 72.100547ms)"],"step_count":2}
	{"level":"warn","ts":"2025-04-14T14:29:27.105882Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.482108ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-290859\" limit:1 ","response":"range_response_count:1 size:4024"}
	{"level":"info","ts":"2025-04-14T14:29:27.105907Z","caller":"traceutil/trace.go:171","msg":"trace[1848025885] range","detail":"{range_begin:/registry/minions/ha-290859; range_end:; response_count:1; response_revision:284; }","duration":"120.538719ms","start":"2025-04-14T14:29:26.985360Z","end":"2025-04-14T14:29:27.105899Z","steps":["trace[1848025885] 'agreement among raft nodes before linearized reading'  (duration: 120.384333ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:29:59 up 1 min,  0 users,  load average: 0.20, 0.08, 0.03
	Linux ha-290859 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2df8ccb8d6ed928a95e69ecd1be2105fc737c699aa26805820a0af0eca5bb50d] <==
	I0414 14:29:33.700839       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0414 14:29:33.701358       1 main.go:139] hostIP = 192.168.39.110
	podIP = 192.168.39.110
	I0414 14:29:33.793646       1 main.go:148] setting mtu 1500 for CNI 
	I0414 14:29:33.793783       1 main.go:178] kindnetd IP family: "ipv4"
	I0414 14:29:33.793875       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0414 14:29:34.500111       1 main.go:239] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-40: Error: Could not process rule: Operation not supported
	add table inet kindnet-network-policies
	^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	, skipping network policies
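The nftables failure above means the guest kernel (5.10.207 on Buildroot 2023.02.9, per the node info) rejects the table kindnet tries to create, so kindnet skips network-policy enforcement while basic pod networking keeps working, as the node-handling lines that follow show. A reproduction sketch, assuming the guest image ships the nft binary:

	minikube ssh -p ha-290859 -- sudo nft add table inet kindnet-network-policies
	# expected on this kernel: Error: Could not process rule: Operation not supported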
	I0414 14:29:44.503197       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:29:44.503441       1 main.go:301] handling current node
	I0414 14:29:54.509621       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:29:54.509758       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3607093f95b0430c4841d7be9ed19d0163ff2e9ee2889a44f89bd1ca07bf42d3] <==
	I0414 14:29:22.336292       1 policy_source.go:240] refreshing policies
	E0414 14:29:22.338963       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0414 14:29:22.361649       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0414 14:29:22.361941       1 shared_informer.go:320] Caches are synced for configmaps
	I0414 14:29:22.362262       1 aggregator.go:171] initial CRD sync complete...
	I0414 14:29:22.362271       1 autoregister_controller.go:144] Starting autoregister controller
	I0414 14:29:22.362276       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0414 14:29:22.362280       1 cache.go:39] Caches are synced for autoregister controller
	I0414 14:29:22.378719       1 controller.go:615] quota admission added evaluator for: namespaces
	I0414 14:29:22.457815       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0414 14:29:23.164003       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0414 14:29:23.168635       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0414 14:29:23.168816       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0414 14:29:23.763560       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0414 14:29:23.812117       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0414 14:29:23.884276       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0414 14:29:23.896601       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.110]
	I0414 14:29:23.897534       1 controller.go:615] quota admission added evaluator for: endpoints
	I0414 14:29:23.902387       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0414 14:29:24.193931       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0414 14:29:25.780107       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0414 14:29:25.808820       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0414 14:29:25.816856       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0414 14:29:29.653221       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0414 14:29:29.756960       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [8263b35014337f6119ba3a0d6487090fd5b1b3b8a002a99623620e847d186847] <==
	I0414 14:29:28.843253       1 shared_informer.go:320] Caches are synced for deployment
	I0414 14:29:28.844034       1 shared_informer.go:320] Caches are synced for persistent volume
	I0414 14:29:28.844299       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0414 14:29:28.848906       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0414 14:29:28.849212       1 shared_informer.go:320] Caches are synced for garbage collector
	I0414 14:29:28.849296       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0414 14:29:28.849401       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0414 14:29:28.849617       1 shared_informer.go:320] Caches are synced for resource quota
	I0414 14:29:28.850996       1 shared_informer.go:320] Caches are synced for stateful set
	I0414 14:29:29.000358       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
	I0414 14:29:29.886420       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="120.420823ms"
	I0414 14:29:29.906585       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="20.109075ms"
	I0414 14:29:29.906712       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="88.01µs"
	I0414 14:29:44.519476       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
	I0414 14:29:44.534945       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
	I0414 14:29:44.547691       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="1.626341ms"
	I0414 14:29:44.559315       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="67.802µs"
	I0414 14:29:44.571127       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="74.78µs"
	I0414 14:29:44.594711       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="70.198µs"
	I0414 14:29:45.825051       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="19.769469ms"
	I0414 14:29:45.826885       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="164.591µs"
	I0414 14:29:45.846118       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="13.808387ms"
	I0414 14:29:45.849026       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="50.566µs"
	I0414 14:29:48.846765       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0414 14:29:56.189929       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
	
	
	==> kube-proxy [e22a81661302ff340c9846a7a06a13d955ab98cfe8e7088e0c805fb4f3eee8a2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0414 14:29:30.555771       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0414 14:29:30.580550       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.110"]
	E0414 14:29:30.580640       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0414 14:29:30.617235       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0414 14:29:30.617293       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0414 14:29:30.617328       1 server_linux.go:170] "Using iptables Proxier"
	I0414 14:29:30.620046       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0414 14:29:30.620989       1 server.go:497] "Version info" version="v1.32.2"
	I0414 14:29:30.621018       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 14:29:30.625365       1 config.go:329] "Starting node config controller"
	I0414 14:29:30.625863       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0414 14:29:30.628597       1 config.go:199] "Starting service config controller"
	I0414 14:29:30.628644       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0414 14:29:30.628665       1 config.go:105] "Starting endpoint slice config controller"
	I0414 14:29:30.628683       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0414 14:29:30.726314       1 shared_informer.go:320] Caches are synced for node config
	I0414 14:29:30.729639       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0414 14:29:30.729680       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [341626ffff967b14e3bfaa050905eba2b82a07223c0356ee50b5deeef6d9898b] <==
	E0414 14:29:22.288686       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:22.287191       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:22.288704       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:22.286394       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0414 14:29:22.288719       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0414 14:29:22.285771       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.108289       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0414 14:29:23.108351       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.153824       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:23.153954       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.203744       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0414 14:29:23.203977       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0414 14:29:23.367236       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0414 14:29:23.367550       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.396026       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0414 14:29:23.396243       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.401643       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:23.401820       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.425454       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0414 14:29:23.425684       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.433181       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:23.433222       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.457688       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0414 14:29:23.457949       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0414 14:29:25.662221       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 14 14:29:26 ha-290859 kubelet[1300]: I0414 14:29:26.859439    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ha-290859" podStartSLOduration=1.859425056 podStartE2EDuration="1.859425056s" podCreationTimestamp="2025-04-14 14:29:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-14 14:29:26.837835811 +0000 UTC m=+1.278826064" watchObservedRunningTime="2025-04-14 14:29:26.859425056 +0000 UTC m=+1.300415308"
	Apr 14 14:29:26 ha-290859 kubelet[1300]: I0414 14:29:26.859604    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ha-290859" podStartSLOduration=1.859595615 podStartE2EDuration="1.859595615s" podCreationTimestamp="2025-04-14 14:29:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-14 14:29:26.859205988 +0000 UTC m=+1.300196243" watchObservedRunningTime="2025-04-14 14:29:26.859595615 +0000 UTC m=+1.300585870"
	Apr 14 14:29:28 ha-290859 kubelet[1300]: I0414 14:29:28.789189    1300 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 14 14:29:28 ha-290859 kubelet[1300]: I0414 14:29:28.790117    1300 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 14 14:29:29 ha-290859 kubelet[1300]: I0414 14:29:29.800169    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2rxv\" (UniqueName: \"kubernetes.io/projected/b3479bb3-d98e-42a9-bf3a-a6d20c52de81-kube-api-access-z2rxv\") pod \"kindnet-hm99t\" (UID: \"b3479bb3-d98e-42a9-bf3a-a6d20c52de81\") " pod="kube-system/kindnet-hm99t"
	Apr 14 14:29:29 ha-290859 kubelet[1300]: I0414 14:29:29.800223    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4bd6869b-0b23-4901-b9fa-02d62196a4f0-lib-modules\") pod \"kube-proxy-cg945\" (UID: \"4bd6869b-0b23-4901-b9fa-02d62196a4f0\") " pod="kube-system/kube-proxy-cg945"
	Apr 14 14:29:29 ha-290859 kubelet[1300]: I0414 14:29:29.800244    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b3479bb3-d98e-42a9-bf3a-a6d20c52de81-cni-cfg\") pod \"kindnet-hm99t\" (UID: \"b3479bb3-d98e-42a9-bf3a-a6d20c52de81\") " pod="kube-system/kindnet-hm99t"
	Apr 14 14:29:29 ha-290859 kubelet[1300]: I0414 14:29:29.800258    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b3479bb3-d98e-42a9-bf3a-a6d20c52de81-lib-modules\") pod \"kindnet-hm99t\" (UID: \"b3479bb3-d98e-42a9-bf3a-a6d20c52de81\") " pod="kube-system/kindnet-hm99t"
	Apr 14 14:29:29 ha-290859 kubelet[1300]: I0414 14:29:29.800273    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldcxv\" (UniqueName: \"kubernetes.io/projected/4bd6869b-0b23-4901-b9fa-02d62196a4f0-kube-api-access-ldcxv\") pod \"kube-proxy-cg945\" (UID: \"4bd6869b-0b23-4901-b9fa-02d62196a4f0\") " pod="kube-system/kube-proxy-cg945"
	Apr 14 14:29:29 ha-290859 kubelet[1300]: I0414 14:29:29.800291    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4bd6869b-0b23-4901-b9fa-02d62196a4f0-kube-proxy\") pod \"kube-proxy-cg945\" (UID: \"4bd6869b-0b23-4901-b9fa-02d62196a4f0\") " pod="kube-system/kube-proxy-cg945"
	Apr 14 14:29:29 ha-290859 kubelet[1300]: I0414 14:29:29.800318    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4bd6869b-0b23-4901-b9fa-02d62196a4f0-xtables-lock\") pod \"kube-proxy-cg945\" (UID: \"4bd6869b-0b23-4901-b9fa-02d62196a4f0\") " pod="kube-system/kube-proxy-cg945"
	Apr 14 14:29:29 ha-290859 kubelet[1300]: I0414 14:29:29.800334    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b3479bb3-d98e-42a9-bf3a-a6d20c52de81-xtables-lock\") pod \"kindnet-hm99t\" (UID: \"b3479bb3-d98e-42a9-bf3a-a6d20c52de81\") " pod="kube-system/kindnet-hm99t"
	Apr 14 14:29:29 ha-290859 kubelet[1300]: I0414 14:29:29.927080    1300 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Apr 14 14:29:30 ha-290859 kubelet[1300]: I0414 14:29:30.759848    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cg945" podStartSLOduration=1.759822313 podStartE2EDuration="1.759822313s" podCreationTimestamp="2025-04-14 14:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-14 14:29:30.758021458 +0000 UTC m=+5.199011713" watchObservedRunningTime="2025-04-14 14:29:30.759822313 +0000 UTC m=+5.200812548"
	Apr 14 14:29:38 ha-290859 kubelet[1300]: I0414 14:29:38.319236    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-hm99t" podStartSLOduration=6.431741101 podStartE2EDuration="9.319165475s" podCreationTimestamp="2025-04-14 14:29:29 +0000 UTC" firstStartedPulling="2025-04-14 14:29:30.4398268 +0000 UTC m=+4.880817048" lastFinishedPulling="2025-04-14 14:29:33.327251182 +0000 UTC m=+7.768241422" observedRunningTime="2025-04-14 14:29:33.777221168 +0000 UTC m=+8.218211403" watchObservedRunningTime="2025-04-14 14:29:38.319165475 +0000 UTC m=+12.760155728"
	Apr 14 14:29:44 ha-290859 kubelet[1300]: I0414 14:29:44.505879    1300 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
	Apr 14 14:29:44 ha-290859 kubelet[1300]: I0414 14:29:44.603696    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a590080d-c4b1-4697-9849-ae6130e483a3-config-volume\") pod \"coredns-668d6bf9bc-qnl6q\" (UID: \"a590080d-c4b1-4697-9849-ae6130e483a3\") " pod="kube-system/coredns-668d6bf9bc-qnl6q"
	Apr 14 14:29:44 ha-290859 kubelet[1300]: I0414 14:29:44.603889    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9lng\" (UniqueName: \"kubernetes.io/projected/a590080d-c4b1-4697-9849-ae6130e483a3-kube-api-access-k9lng\") pod \"coredns-668d6bf9bc-qnl6q\" (UID: \"a590080d-c4b1-4697-9849-ae6130e483a3\") " pod="kube-system/coredns-668d6bf9bc-qnl6q"
	Apr 14 14:29:44 ha-290859 kubelet[1300]: I0414 14:29:44.604007    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sggjh\" (UniqueName: \"kubernetes.io/projected/5c2a6c8d-60f5-466d-8f59-f43a26cf06c4-kube-api-access-sggjh\") pod \"coredns-668d6bf9bc-wbn4p\" (UID: \"5c2a6c8d-60f5-466d-8f59-f43a26cf06c4\") " pod="kube-system/coredns-668d6bf9bc-wbn4p"
	Apr 14 14:29:44 ha-290859 kubelet[1300]: I0414 14:29:44.604073    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c2a6c8d-60f5-466d-8f59-f43a26cf06c4-config-volume\") pod \"coredns-668d6bf9bc-wbn4p\" (UID: \"5c2a6c8d-60f5-466d-8f59-f43a26cf06c4\") " pod="kube-system/coredns-668d6bf9bc-wbn4p"
	Apr 14 14:29:44 ha-290859 kubelet[1300]: I0414 14:29:44.604118    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a98bb55f-5a73-4436-82eb-ae7534928039-tmp\") pod \"storage-provisioner\" (UID: \"a98bb55f-5a73-4436-82eb-ae7534928039\") " pod="kube-system/storage-provisioner"
	Apr 14 14:29:44 ha-290859 kubelet[1300]: I0414 14:29:44.604163    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnm4d\" (UniqueName: \"kubernetes.io/projected/a98bb55f-5a73-4436-82eb-ae7534928039-kube-api-access-xnm4d\") pod \"storage-provisioner\" (UID: \"a98bb55f-5a73-4436-82eb-ae7534928039\") " pod="kube-system/storage-provisioner"
	Apr 14 14:29:45 ha-290859 kubelet[1300]: I0414 14:29:45.804448    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.804430214 podStartE2EDuration="15.804430214s" podCreationTimestamp="2025-04-14 14:29:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-14 14:29:45.792326929 +0000 UTC m=+20.233317179" watchObservedRunningTime="2025-04-14 14:29:45.804430214 +0000 UTC m=+20.245420469"
	Apr 14 14:29:45 ha-290859 kubelet[1300]: I0414 14:29:45.830229    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wbn4p" podStartSLOduration=16.830170415 podStartE2EDuration="16.830170415s" podCreationTimestamp="2025-04-14 14:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-14 14:29:45.80569588 +0000 UTC m=+20.246686135" watchObservedRunningTime="2025-04-14 14:29:45.830170415 +0000 UTC m=+20.271160663"
	Apr 14 14:29:45 ha-290859 kubelet[1300]: I0414 14:29:45.830711    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qnl6q" podStartSLOduration=16.830651166 podStartE2EDuration="16.830651166s" podCreationTimestamp="2025-04-14 14:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-14 14:29:45.828813483 +0000 UTC m=+20.269803765" watchObservedRunningTime="2025-04-14 14:29:45.830651166 +0000 UTC m=+20.271641420"
	
	
	==> storage-provisioner [922f97d06563e10c12ce83edd45e4f1aa0b78449dcdb50b413a7f4fc80cc346b] <==
	I0414 14:29:45.362622       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0414 14:29:45.429344       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0414 14:29:45.429932       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0414 14:29:45.442302       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0414 14:29:45.443637       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1cd1340a-7958-40a2-8c68-004b8c8385a8", APIVersion:"v1", ResourceVersion:"420", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-290859_00c8818d-bfd0-4e70-bffb-1f8673302f0b became leader
	I0414 14:29:45.444610       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-290859_00c8818d-bfd0-4e70-bffb-1f8673302f0b!
	I0414 14:29:45.546579       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-290859_00c8818d-bfd0-4e70-bffb-1f8673302f0b!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-290859 -n ha-290859
helpers_test.go:261: (dbg) Run:  kubectl --context ha-290859 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StartCluster (75.86s)
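For a failure like this (exit status 80 while bringing up the ha-290859-m02 node), the usual next step is pulling that node's own logs; a sketch, assuming minikube's logs command with its node selector:

	out/minikube-linux-amd64 logs -p ha-290859 --node ha-290859-m02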

                                                
                                    
TestMultiControlPlane/serial/DeployApp (717.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-290859 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-290859 -- rollout status deployment/busybox
E0414 14:30:58.683228 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/addons-537199/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:31:26.383467 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/addons-537199/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:32:59.574948 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/functional-905978/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:32:59.581367 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/functional-905978/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:32:59.592712 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/functional-905978/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:32:59.614137 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/functional-905978/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:32:59.655621 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/functional-905978/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:32:59.737170 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/functional-905978/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:32:59.898787 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/functional-905978/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:33:00.220501 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/functional-905978/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:33:00.862637 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/functional-905978/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:33:02.144342 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/functional-905978/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:33:04.707306 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/functional-905978/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:33:09.829172 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/functional-905978/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:33:20.071008 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/functional-905978/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:33:40.552612 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/functional-905978/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:34:21.515517 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/functional-905978/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:35:43.440791 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/functional-905978/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:35:58.686778 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/addons-537199/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:37:59.575775 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/functional-905978/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:38:27.282402 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/functional-905978/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:133: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p ha-290859 -- rollout status deployment/busybox: exit status 1 (10m3.253050553s)

                                                
                                                
-- stdout --
	Waiting for deployment "busybox" rollout to finish: 0 of 3 updated replicas are available...
	Waiting for deployment "busybox" rollout to finish: 1 of 3 updated replicas are available...

                                                
                                                
-- /stdout --
** stderr ** 
	error: deployment "busybox" exceeded its progress deadline

                                                
                                                
** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-290859 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I0414 14:40:04.183960 1203639 retry.go:31] will retry after 1.090231022s: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-290859 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I0414 14:40:05.385738 1203639 retry.go:31] will retry after 1.660153884s: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-290859 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I0414 14:40:07.156363 1203639 retry.go:31] will retry after 1.53701521s: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-290859 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I0414 14:40:08.807893 1203639 retry.go:31] will retry after 3.621329908s: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-290859 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I0414 14:40:12.546808 1203639 retry.go:31] will retry after 7.286439274s: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-290859 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I0414 14:40:19.949428 1203639 retry.go:31] will retry after 7.700409303s: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-290859 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I0414 14:40:27.771522 1203639 retry.go:31] will retry after 6.840244338s: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-290859 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I0414 14:40:34.736270 1203639 retry.go:31] will retry after 17.893116228s: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-290859 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I0414 14:40:52.747347 1203639 retry.go:31] will retry after 16.157271181s: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
E0414 14:40:58.684434 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/addons-537199/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-290859 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I0414 14:41:09.021747 1203639 retry.go:31] will retry after 44.358199091s: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-290859 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:159: failed to resolve pod IPs: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
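The polling above is the test's retry helper at work: it re-runs the same jsonpath query with growing, jittered delays until three pod IPs show up or the overall deadline expires. Below is a minimal sketch of that loop, with a fixed sleep standing in for retry.go's randomized backoff; the binary path, profile name, and jsonpath come from the log, but the helper itself is illustrative, not the test's code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPodIPs re-runs the logged jsonpath query until `want` pod IPs are
// reported or the timeout expires.
func waitForPodIPs(profile string, want int, timeout time.Duration) ([]string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("out/minikube-linux-amd64", "kubectl", "-p", profile, "--",
			"get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
		if err == nil {
			// jsonpath joins the IPs with spaces, so Fields splits them apart.
			if ips := strings.Fields(string(out)); len(ips) >= want {
				return ips, nil
			}
		}
		time.Sleep(5 * time.Second) // the real helper grows this delay each attempt
	}
	return nil, fmt.Errorf("expected %d pod IPs within %s", want, timeout)
}

func main() {
	ips, err := waitForPodIPs("ha-290859", 3, 2*time.Minute)
	fmt.Println(ips, err)
}

Here the query kept returning a single IP (10.244.0.4) for roughly ten minutes, so the deadline at ha_test.go:159 fired.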
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-290859 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-290859 -- exec busybox-58667487b6-8bg2x -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p ha-290859 -- exec busybox-58667487b6-8bg2x -- nslookup kubernetes.io: exit status 1 (123.629369ms)

** stderr ** 
	Error from server (BadRequest): pod busybox-58667487b6-8bg2x does not have a host assigned

** /stderr **
ha_test.go:173: Pod busybox-58667487b6-8bg2x could not resolve 'kubernetes.io': exit status 1
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-290859 -- exec busybox-58667487b6-q9jvx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p ha-290859 -- exec busybox-58667487b6-q9jvx -- nslookup kubernetes.io: exit status 1 (124.639394ms)

** stderr ** 
	Error from server (BadRequest): pod busybox-58667487b6-q9jvx does not have a host assigned

** /stderr **
ha_test.go:173: Pod busybox-58667487b6-q9jvx could not resolve 'kubernetes.io': exit status 1
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-290859 -- exec busybox-58667487b6-t6bgg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-290859 -- exec busybox-58667487b6-t6bgg -- nslookup kubernetes.io: (1.220104905s)
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-290859 -- exec busybox-58667487b6-8bg2x -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p ha-290859 -- exec busybox-58667487b6-8bg2x -- nslookup kubernetes.default: exit status 1 (121.579092ms)

** stderr ** 
	Error from server (BadRequest): pod busybox-58667487b6-8bg2x does not have a host assigned

** /stderr **
ha_test.go:183: Pod busybox-58667487b6-8bg2x could not resolve 'kubernetes.default': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-290859 -- exec busybox-58667487b6-q9jvx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p ha-290859 -- exec busybox-58667487b6-q9jvx -- nslookup kubernetes.default: exit status 1 (124.952183ms)

** stderr ** 
	Error from server (BadRequest): pod busybox-58667487b6-q9jvx does not have a host assigned

** /stderr **
ha_test.go:183: Pod busybox-58667487b6-q9jvx could not resolve 'kubernetes.default': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-290859 -- exec busybox-58667487b6-t6bgg -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-290859 -- exec busybox-58667487b6-8bg2x -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p ha-290859 -- exec busybox-58667487b6-8bg2x -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (124.294269ms)

** stderr ** 
	Error from server (BadRequest): pod busybox-58667487b6-8bg2x does not have a host assigned

** /stderr **
ha_test.go:191: Pod busybox-58667487b6-8bg2x could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-290859 -- exec busybox-58667487b6-q9jvx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p ha-290859 -- exec busybox-58667487b6-q9jvx -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (125.232841ms)

** stderr ** 
	Error from server (BadRequest): pod busybox-58667487b6-q9jvx does not have a host assigned

** /stderr **
ha_test.go:191: Pod busybox-58667487b6-q9jvx could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-290859 -- exec busybox-58667487b6-t6bgg -- nslookup kubernetes.default.svc.cluster.local
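The BadRequest errors above ("does not have a host assigned") are the API server refusing to exec into pods that were never scheduled, which is consistent with the single pod IP seen earlier: only busybox-58667487b6-t6bgg landed on a node. A hedged client-go sketch for confirming that directly follows; the kubeconfig path and the app=busybox label selector are assumptions, not taken from the log:

package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Load a kubeconfig (assumed default path; this CI run points
	// KUBECONFIG elsewhere).
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Assumed label selector for the busybox deployment's pods.
	pods, err := cs.CoreV1().Pods("default").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "app=busybox"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		// An empty Spec.NodeName is exactly what "does not have a host
		// assigned" reports when you try to exec into the pod.
		fmt.Printf("%s node=%q ip=%q phase=%s\n",
			p.Name, p.Spec.NodeName, p.Status.PodIP, p.Status.Phase)
	}
}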
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-290859 -n ha-290859
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-290859 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-290859 logs -n 25: (1.14766157s)
helpers_test.go:252: TestMultiControlPlane/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| delete  | -p functional-905978                 | functional-905978 | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC | 14 Apr 25 14:28 UTC |
	| start   | -p ha-290859 --wait=true             | ha-290859         | jenkins | v1.35.0 | 14 Apr 25 14:28 UTC |                     |
	|         | --memory=2200 --ha                   |                   |         |         |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |         |         |                     |                     |
	|         | --driver=kvm2                        |                   |         |         |                     |                     |
	|         | --container-runtime=containerd       |                   |         |         |                     |                     |
	| kubectl | -p ha-290859 -- apply -f             | ha-290859         | jenkins | v1.35.0 | 14 Apr 25 14:30 UTC | 14 Apr 25 14:30 UTC |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |         |         |                     |                     |
	| kubectl | -p ha-290859 -- rollout status       | ha-290859         | jenkins | v1.35.0 | 14 Apr 25 14:30 UTC |                     |
	|         | deployment/busybox                   |                   |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859         | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859         | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859         | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859         | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859         | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859         | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859         | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859         | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859         | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859         | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859         | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859         | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859         | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x --          |                   |         |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859         | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx --          |                   |         |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859         | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg --          |                   |         |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859         | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x --          |                   |         |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859         | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx --          |                   |         |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859         | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg --          |                   |         |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859         | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x -- nslookup |                   |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859         | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx -- nslookup |                   |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859         | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg -- nslookup |                   |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |         |         |                     |                     |
	|---------|--------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 14:28:44
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 14:28:44.853283 1213155 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:28:44.853383 1213155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:28:44.853391 1213155 out.go:358] Setting ErrFile to fd 2...
	I0414 14:28:44.853395 1213155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:28:44.853589 1213155 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
	I0414 14:28:44.854173 1213155 out.go:352] Setting JSON to false
	I0414 14:28:44.855127 1213155 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":22268,"bootTime":1744618657,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 14:28:44.855241 1213155 start.go:139] virtualization: kvm guest
	I0414 14:28:44.857434 1213155 out.go:177] * [ha-290859] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 14:28:44.858763 1213155 out.go:177]   - MINIKUBE_LOCATION=20512
	I0414 14:28:44.858802 1213155 notify.go:220] Checking for updates...
	I0414 14:28:44.861113 1213155 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 14:28:44.862568 1213155 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:28:44.864291 1213155 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:28:44.865558 1213155 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 14:28:44.866690 1213155 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 14:28:44.867994 1213155 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 14:28:44.903880 1213155 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 14:28:44.904972 1213155 start.go:297] selected driver: kvm2
	I0414 14:28:44.904990 1213155 start.go:901] validating driver "kvm2" against <nil>
	I0414 14:28:44.905002 1213155 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 14:28:44.905693 1213155 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:28:44.905760 1213155 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20512-1196368/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 14:28:44.921165 1213155 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 14:28:44.921211 1213155 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 14:28:44.921449 1213155 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 14:28:44.921483 1213155 cni.go:84] Creating CNI manager for ""
	I0414 14:28:44.921521 1213155 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0414 14:28:44.921528 1213155 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0414 14:28:44.921581 1213155 start.go:340] cluster config:
	{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:28:44.921681 1213155 iso.go:125] acquiring lock: {Name:mkbf783c803effe6c4b8297ac6b84dcca9e29413 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:28:44.923479 1213155 out.go:177] * Starting "ha-290859" primary control-plane node in "ha-290859" cluster
	I0414 14:28:44.924489 1213155 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:28:44.924534 1213155 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
	I0414 14:28:44.924545 1213155 cache.go:56] Caching tarball of preloaded images
	I0414 14:28:44.924630 1213155 preload.go:172] Found /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0414 14:28:44.924642 1213155 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0414 14:28:44.925004 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:28:44.925036 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json: {Name:mk9cf46898e9311ef305249e5d7a46d116958366 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:28:44.925215 1213155 start.go:360] acquireMachinesLock for ha-290859: {Name:mk496006d22a0565bb9e0d565e1b3cb0cf0971cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 14:28:44.925249 1213155 start.go:364] duration metric: took 19.936µs to acquireMachinesLock for "ha-290859"
	I0414 14:28:44.925270 1213155 start.go:93] Provisioning new machine with config: &{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0414 14:28:44.925333 1213155 start.go:125] createHost starting for "" (driver="kvm2")
	I0414 14:28:44.926873 1213155 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0414 14:28:44.927025 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:28:44.927081 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:28:44.941913 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35769
	I0414 14:28:44.942352 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:28:44.942833 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:28:44.942851 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:28:44.943193 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:28:44.943375 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:28:44.943526 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:28:44.943664 1213155 start.go:159] libmachine.API.Create for "ha-290859" (driver="kvm2")
	I0414 14:28:44.943687 1213155 client.go:168] LocalClient.Create starting
	I0414 14:28:44.943713 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem
	I0414 14:28:44.943749 1213155 main.go:141] libmachine: Decoding PEM data...
	I0414 14:28:44.943766 1213155 main.go:141] libmachine: Parsing certificate...
	I0414 14:28:44.943825 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem
	I0414 14:28:44.943844 1213155 main.go:141] libmachine: Decoding PEM data...
	I0414 14:28:44.943857 1213155 main.go:141] libmachine: Parsing certificate...
	I0414 14:28:44.943880 1213155 main.go:141] libmachine: Running pre-create checks...
	I0414 14:28:44.943888 1213155 main.go:141] libmachine: (ha-290859) Calling .PreCreateCheck
	I0414 14:28:44.944202 1213155 main.go:141] libmachine: (ha-290859) Calling .GetConfigRaw
	I0414 14:28:44.944583 1213155 main.go:141] libmachine: Creating machine...
	I0414 14:28:44.944596 1213155 main.go:141] libmachine: (ha-290859) Calling .Create
	I0414 14:28:44.944741 1213155 main.go:141] libmachine: (ha-290859) creating KVM machine...
	I0414 14:28:44.944764 1213155 main.go:141] libmachine: (ha-290859) creating network...
	I0414 14:28:44.945897 1213155 main.go:141] libmachine: (ha-290859) DBG | found existing default KVM network
	I0414 14:28:44.946500 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:44.946375 1213178 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001236b0}
	I0414 14:28:44.946525 1213155 main.go:141] libmachine: (ha-290859) DBG | created network xml: 
	I0414 14:28:44.946536 1213155 main.go:141] libmachine: (ha-290859) DBG | <network>
	I0414 14:28:44.946547 1213155 main.go:141] libmachine: (ha-290859) DBG |   <name>mk-ha-290859</name>
	I0414 14:28:44.946556 1213155 main.go:141] libmachine: (ha-290859) DBG |   <dns enable='no'/>
	I0414 14:28:44.946567 1213155 main.go:141] libmachine: (ha-290859) DBG |   
	I0414 14:28:44.946578 1213155 main.go:141] libmachine: (ha-290859) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0414 14:28:44.946589 1213155 main.go:141] libmachine: (ha-290859) DBG |     <dhcp>
	I0414 14:28:44.946597 1213155 main.go:141] libmachine: (ha-290859) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0414 14:28:44.946611 1213155 main.go:141] libmachine: (ha-290859) DBG |     </dhcp>
	I0414 14:28:44.946635 1213155 main.go:141] libmachine: (ha-290859) DBG |   </ip>
	I0414 14:28:44.946659 1213155 main.go:141] libmachine: (ha-290859) DBG |   
	I0414 14:28:44.946681 1213155 main.go:141] libmachine: (ha-290859) DBG | </network>
	I0414 14:28:44.946692 1213155 main.go:141] libmachine: (ha-290859) DBG | 
	I0414 14:28:44.951588 1213155 main.go:141] libmachine: (ha-290859) DBG | trying to create private KVM network mk-ha-290859 192.168.39.0/24...
	I0414 14:28:45.019463 1213155 main.go:141] libmachine: (ha-290859) DBG | private KVM network mk-ha-290859 192.168.39.0/24 created
	I0414 14:28:45.019524 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.019424 1213178 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:28:45.019537 1213155 main.go:141] libmachine: (ha-290859) setting up store path in /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859 ...
	I0414 14:28:45.019577 1213155 main.go:141] libmachine: (ha-290859) building disk image from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 14:28:45.019612 1213155 main.go:141] libmachine: (ha-290859) Downloading /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 14:28:45.329551 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.329430 1213178 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa...
	I0414 14:28:45.651739 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.651571 1213178 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/ha-290859.rawdisk...
	I0414 14:28:45.651774 1213155 main.go:141] libmachine: (ha-290859) DBG | Writing magic tar header
	I0414 14:28:45.651813 1213155 main.go:141] libmachine: (ha-290859) DBG | Writing SSH key tar header
	I0414 14:28:45.651828 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.651709 1213178 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859 ...
	I0414 14:28:45.651838 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859
	I0414 14:28:45.651849 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines
	I0414 14:28:45.651870 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:28:45.651877 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368
	I0414 14:28:45.651888 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859 (perms=drwx------)
	I0414 14:28:45.651901 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines (perms=drwxr-xr-x)
	I0414 14:28:45.651912 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube (perms=drwxr-xr-x)
	I0414 14:28:45.651969 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 14:28:45.651997 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins
	I0414 14:28:45.652007 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368 (perms=drwxrwxr-x)
	I0414 14:28:45.652022 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 14:28:45.652031 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 14:28:45.652040 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home
	I0414 14:28:45.652050 1213155 main.go:141] libmachine: (ha-290859) DBG | skipping /home - not owner
	I0414 14:28:45.652117 1213155 main.go:141] libmachine: (ha-290859) creating domain...
	I0414 14:28:45.653155 1213155 main.go:141] libmachine: (ha-290859) define libvirt domain using xml: 
	I0414 14:28:45.653173 1213155 main.go:141] libmachine: (ha-290859) <domain type='kvm'>
	I0414 14:28:45.653182 1213155 main.go:141] libmachine: (ha-290859)   <name>ha-290859</name>
	I0414 14:28:45.653197 1213155 main.go:141] libmachine: (ha-290859)   <memory unit='MiB'>2200</memory>
	I0414 14:28:45.653206 1213155 main.go:141] libmachine: (ha-290859)   <vcpu>2</vcpu>
	I0414 14:28:45.653212 1213155 main.go:141] libmachine: (ha-290859)   <features>
	I0414 14:28:45.653231 1213155 main.go:141] libmachine: (ha-290859)     <acpi/>
	I0414 14:28:45.653240 1213155 main.go:141] libmachine: (ha-290859)     <apic/>
	I0414 14:28:45.653258 1213155 main.go:141] libmachine: (ha-290859)     <pae/>
	I0414 14:28:45.653267 1213155 main.go:141] libmachine: (ha-290859)     
	I0414 14:28:45.653272 1213155 main.go:141] libmachine: (ha-290859)   </features>
	I0414 14:28:45.653277 1213155 main.go:141] libmachine: (ha-290859)   <cpu mode='host-passthrough'>
	I0414 14:28:45.653281 1213155 main.go:141] libmachine: (ha-290859)   
	I0414 14:28:45.653287 1213155 main.go:141] libmachine: (ha-290859)   </cpu>
	I0414 14:28:45.653317 1213155 main.go:141] libmachine: (ha-290859)   <os>
	I0414 14:28:45.653340 1213155 main.go:141] libmachine: (ha-290859)     <type>hvm</type>
	I0414 14:28:45.653351 1213155 main.go:141] libmachine: (ha-290859)     <boot dev='cdrom'/>
	I0414 14:28:45.653362 1213155 main.go:141] libmachine: (ha-290859)     <boot dev='hd'/>
	I0414 14:28:45.653372 1213155 main.go:141] libmachine: (ha-290859)     <bootmenu enable='no'/>
	I0414 14:28:45.653379 1213155 main.go:141] libmachine: (ha-290859)   </os>
	I0414 14:28:45.653387 1213155 main.go:141] libmachine: (ha-290859)   <devices>
	I0414 14:28:45.653396 1213155 main.go:141] libmachine: (ha-290859)     <disk type='file' device='cdrom'>
	I0414 14:28:45.653409 1213155 main.go:141] libmachine: (ha-290859)       <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/boot2docker.iso'/>
	I0414 14:28:45.653425 1213155 main.go:141] libmachine: (ha-290859)       <target dev='hdc' bus='scsi'/>
	I0414 14:28:45.653434 1213155 main.go:141] libmachine: (ha-290859)       <readonly/>
	I0414 14:28:45.653441 1213155 main.go:141] libmachine: (ha-290859)     </disk>
	I0414 14:28:45.653450 1213155 main.go:141] libmachine: (ha-290859)     <disk type='file' device='disk'>
	I0414 14:28:45.653459 1213155 main.go:141] libmachine: (ha-290859)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 14:28:45.653472 1213155 main.go:141] libmachine: (ha-290859)       <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/ha-290859.rawdisk'/>
	I0414 14:28:45.653484 1213155 main.go:141] libmachine: (ha-290859)       <target dev='hda' bus='virtio'/>
	I0414 14:28:45.653515 1213155 main.go:141] libmachine: (ha-290859)     </disk>
	I0414 14:28:45.653535 1213155 main.go:141] libmachine: (ha-290859)     <interface type='network'>
	I0414 14:28:45.653542 1213155 main.go:141] libmachine: (ha-290859)       <source network='mk-ha-290859'/>
	I0414 14:28:45.653551 1213155 main.go:141] libmachine: (ha-290859)       <model type='virtio'/>
	I0414 14:28:45.653571 1213155 main.go:141] libmachine: (ha-290859)     </interface>
	I0414 14:28:45.653583 1213155 main.go:141] libmachine: (ha-290859)     <interface type='network'>
	I0414 14:28:45.653600 1213155 main.go:141] libmachine: (ha-290859)       <source network='default'/>
	I0414 14:28:45.653612 1213155 main.go:141] libmachine: (ha-290859)       <model type='virtio'/>
	I0414 14:28:45.653620 1213155 main.go:141] libmachine: (ha-290859)     </interface>
	I0414 14:28:45.653629 1213155 main.go:141] libmachine: (ha-290859)     <serial type='pty'>
	I0414 14:28:45.653637 1213155 main.go:141] libmachine: (ha-290859)       <target port='0'/>
	I0414 14:28:45.653643 1213155 main.go:141] libmachine: (ha-290859)     </serial>
	I0414 14:28:45.653650 1213155 main.go:141] libmachine: (ha-290859)     <console type='pty'>
	I0414 14:28:45.653666 1213155 main.go:141] libmachine: (ha-290859)       <target type='serial' port='0'/>
	I0414 14:28:45.653677 1213155 main.go:141] libmachine: (ha-290859)     </console>
	I0414 14:28:45.653688 1213155 main.go:141] libmachine: (ha-290859)     <rng model='virtio'>
	I0414 14:28:45.653706 1213155 main.go:141] libmachine: (ha-290859)       <backend model='random'>/dev/random</backend>
	I0414 14:28:45.653722 1213155 main.go:141] libmachine: (ha-290859)     </rng>
	I0414 14:28:45.653733 1213155 main.go:141] libmachine: (ha-290859)     
	I0414 14:28:45.653742 1213155 main.go:141] libmachine: (ha-290859)     
	I0414 14:28:45.653750 1213155 main.go:141] libmachine: (ha-290859)   </devices>
	I0414 14:28:45.653759 1213155 main.go:141] libmachine: (ha-290859) </domain>
	I0414 14:28:45.653770 1213155 main.go:141] libmachine: (ha-290859) 
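The XML above is what gets handed to libvirt at the "define libvirt domain using xml" and "creating domain..." steps. A minimal sketch of those two calls with libvirt.org/go/libvirt, assuming the XML has been saved to a file (this is illustrative, not the kvm2 driver's actual code; the ha-290859.xml path is hypothetical):

package main

import (
	"log"
	"os"

	"libvirt.org/go/libvirt"
)

func main() {
	// Hypothetical file holding the domain XML printed in the log above.
	xml, err := os.ReadFile("ha-290859.xml")
	if err != nil {
		log.Fatal(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Define the persistent domain, then boot it.
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()
	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}
}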
	I0414 14:28:45.658722 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:59:bb:2c in network default
	I0414 14:28:45.659333 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:45.659353 1213155 main.go:141] libmachine: (ha-290859) starting domain...
	I0414 14:28:45.659378 1213155 main.go:141] libmachine: (ha-290859) ensuring networks are active...
	I0414 14:28:45.660118 1213155 main.go:141] libmachine: (ha-290859) Ensuring network default is active
	I0414 14:28:45.660455 1213155 main.go:141] libmachine: (ha-290859) Ensuring network mk-ha-290859 is active
	I0414 14:28:45.660871 1213155 main.go:141] libmachine: (ha-290859) getting domain XML...
	I0414 14:28:45.661572 1213155 main.go:141] libmachine: (ha-290859) creating domain...
	I0414 14:28:46.865636 1213155 main.go:141] libmachine: (ha-290859) waiting for IP...
	I0414 14:28:46.866384 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:46.866766 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:46.866798 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:46.866746 1213178 retry.go:31] will retry after 192.973653ms: waiting for domain to come up
	I0414 14:28:47.061336 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:47.061771 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:47.061833 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:47.061746 1213178 retry.go:31] will retry after 359.567223ms: waiting for domain to come up
	I0414 14:28:47.423487 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:47.423982 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:47.424016 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:47.423949 1213178 retry.go:31] will retry after 421.939914ms: waiting for domain to come up
	I0414 14:28:47.847747 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:47.848233 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:47.848285 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:47.848207 1213178 retry.go:31] will retry after 530.391474ms: waiting for domain to come up
	I0414 14:28:48.380081 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:48.380580 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:48.380623 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:48.380551 1213178 retry.go:31] will retry after 642.117854ms: waiting for domain to come up
	I0414 14:28:49.024104 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:49.024507 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:49.024543 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:49.024472 1213178 retry.go:31] will retry after 676.607867ms: waiting for domain to come up
	I0414 14:28:49.702625 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:49.702971 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:49.702999 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:49.702940 1213178 retry.go:31] will retry after 827.403569ms: waiting for domain to come up
	I0414 14:28:50.531673 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:50.532146 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:50.532168 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:50.532111 1213178 retry.go:31] will retry after 1.096062201s: waiting for domain to come up
	I0414 14:28:51.630700 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:51.631223 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:51.631271 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:51.631180 1213178 retry.go:31] will retry after 1.695737217s: waiting for domain to come up
	I0414 14:28:53.328391 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:53.328936 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:53.328976 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:53.328895 1213178 retry.go:31] will retry after 1.847433296s: waiting for domain to come up
	I0414 14:28:55.178635 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:55.179196 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:55.179222 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:55.179116 1213178 retry.go:31] will retry after 1.882043118s: waiting for domain to come up
	I0414 14:28:57.063275 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:57.063819 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:57.063839 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:57.063785 1213178 retry.go:31] will retry after 2.565601812s: waiting for domain to come up
	I0414 14:28:59.632546 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:59.633076 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:59.633121 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:59.633056 1213178 retry.go:31] will retry after 3.119155423s: waiting for domain to come up
	I0414 14:29:02.755950 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:02.756520 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:29:02.756617 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:29:02.756481 1213178 retry.go:31] will retry after 3.570724653s: waiting for domain to come up
	I0414 14:29:06.329744 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.330242 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has current primary IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.330260 1213155 main.go:141] libmachine: (ha-290859) found domain IP: 192.168.39.110
	I0414 14:29:06.330269 1213155 main.go:141] libmachine: (ha-290859) reserving static IP address...
	I0414 14:29:06.330641 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find host DHCP lease matching {name: "ha-290859", mac: "52:54:00:be:9f:8b", ip: "192.168.39.110"} in network mk-ha-290859
	I0414 14:29:06.406487 1213155 main.go:141] libmachine: (ha-290859) DBG | Getting to WaitForSSH function...
	I0414 14:29:06.406521 1213155 main.go:141] libmachine: (ha-290859) reserved static IP address 192.168.39.110 for domain ha-290859
	I0414 14:29:06.406533 1213155 main.go:141] libmachine: (ha-290859) waiting for SSH...
	I0414 14:29:06.409873 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.410210 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:minikube Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.410253 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.410314 1213155 main.go:141] libmachine: (ha-290859) DBG | Using SSH client type: external
	I0414 14:29:06.410387 1213155 main.go:141] libmachine: (ha-290859) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa (-rw-------)
	I0414 14:29:06.410418 1213155 main.go:141] libmachine: (ha-290859) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.110 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:29:06.410439 1213155 main.go:141] libmachine: (ha-290859) DBG | About to run SSH command:
	I0414 14:29:06.410452 1213155 main.go:141] libmachine: (ha-290859) DBG | exit 0
	I0414 14:29:06.535060 1213155 main.go:141] libmachine: (ha-290859) DBG | SSH cmd err, output: <nil>: 
	I0414 14:29:06.535328 1213155 main.go:141] libmachine: (ha-290859) KVM machine creation complete
	I0414 14:29:06.535695 1213155 main.go:141] libmachine: (ha-290859) Calling .GetConfigRaw
	I0414 14:29:06.536306 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:06.536530 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:06.536742 1213155 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 14:29:06.536766 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:06.538276 1213155 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 14:29:06.538292 1213155 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 14:29:06.538297 1213155 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 14:29:06.538303 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:06.540789 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.541096 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.541142 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.541273 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:06.541468 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.541620 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.541797 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:06.541943 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:06.542216 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:06.542236 1213155 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 14:29:06.650464 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
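Both SSH waits above reduce to the same probe: dial the guest and run "exit 0" until it succeeds. Below is a sketch of such a readiness loop with golang.org/x/crypto/ssh, skipping host-key verification just as the logged ssh invocation does with StrictHostKeyChecking=no; the address, user, and key file follow the log, but the loop itself is an assumption, not minikube's implementation:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// waitForSSH keeps dialing and running "exit 0" until the guest answers.
func waitForSSH(addr, user, keyPath string, timeout time.Duration) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no
		Timeout:         10 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if client, err := ssh.Dial("tcp", addr, cfg); err == nil {
			sess, serr := client.NewSession()
			if serr == nil {
				rerr := sess.Run("exit 0")
				sess.Close()
				client.Close()
				if rerr == nil {
					return nil // shell is up; provisioning can proceed
				}
			} else {
				client.Close()
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh on %s not ready within %s", addr, timeout)
}

func main() {
	fmt.Println(waitForSSH("192.168.39.110:22", "docker", "id_rsa", 3*time.Minute))
}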
	I0414 14:29:06.650493 1213155 main.go:141] libmachine: Detecting the provisioner...
	I0414 14:29:06.650505 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:06.653952 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.654723 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.654757 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.654985 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:06.655204 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.655393 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.655541 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:06.655742 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:06.655964 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:06.655983 1213155 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 14:29:06.763752 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 14:29:06.763848 1213155 main.go:141] libmachine: found compatible host: buildroot
	I0414 14:29:06.763862 1213155 main.go:141] libmachine: Provisioning with buildroot...
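Provisioner detection is just the `cat /etc/os-release` output parsed into key=value pairs and matched against known distributions (Buildroot here). A small illustrative parser for that step, not minikube's code:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// parseOSRelease turns /etc/os-release lines like ID=buildroot into a map.
func parseOSRelease(path string) (map[string]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	kv := map[string]string{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		if parts := strings.SplitN(line, "=", 2); len(parts) == 2 {
			kv[parts[0]] = strings.Trim(parts[1], `"`) // PRETTY_NAME is quoted
		}
	}
	return kv, sc.Err()
}

func main() {
	info, err := parseOSRelease("/etc/os-release")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(info["ID"], info["PRETTY_NAME"]) // e.g. buildroot, Buildroot 2023.02.9
}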
	I0414 14:29:06.763874 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:29:06.764294 1213155 buildroot.go:166] provisioning hostname "ha-290859"
	I0414 14:29:06.764326 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:29:06.764523 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:06.767077 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.767516 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.767542 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.767639 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:06.767813 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.767978 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.768165 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:06.768341 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:06.768572 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:06.768583 1213155 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-290859 && echo "ha-290859" | sudo tee /etc/hostname
	I0414 14:29:06.889296 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-290859
	
	I0414 14:29:06.889330 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:06.892172 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.892600 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.892626 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.892865 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:06.893083 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.893277 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.893435 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:06.893648 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:06.893858 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:06.893874 1213155 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-290859' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-290859/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-290859' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:29:07.007141 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
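
That script is the provisioner's idempotent /etc/hosts update: do nothing if the hostname is already present, rewrite an existing 127.0.1.1 entry if there is one, and only append as a last resort. A sketch of how such a snippet can be rendered for any hostname (illustrative, not the actual provisioner helper):

    package main

    import "fmt"

    // hostsCmd renders the idempotent 127.0.1.1 update shown in the log
    // for an arbitrary hostname.
    func hostsCmd(name string) string {
        return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
      fi
    fi`, name)
    }

    func main() { fmt.Println(hostsCmd("ha-290859")) }
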
	I0414 14:29:07.007184 1213155 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1196368/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1196368/.minikube}
	I0414 14:29:07.007203 1213155 buildroot.go:174] setting up certificates
	I0414 14:29:07.007215 1213155 provision.go:84] configureAuth start
	I0414 14:29:07.007224 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:29:07.007528 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:29:07.010400 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.010788 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.010824 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.010979 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.012963 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.013271 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.013387 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.013515 1213155 provision.go:143] copyHostCerts
	I0414 14:29:07.013548 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:29:07.013586 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem, removing ...
	I0414 14:29:07.013609 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:29:07.013691 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem (1082 bytes)
	I0414 14:29:07.013790 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:29:07.013815 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem, removing ...
	I0414 14:29:07.013825 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:29:07.013863 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem (1123 bytes)
	I0414 14:29:07.013930 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:29:07.013953 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem, removing ...
	I0414 14:29:07.013962 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:29:07.013998 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem (1675 bytes)
	I0414 14:29:07.014066 1213155 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem org=jenkins.ha-290859 san=[127.0.0.1 192.168.39.110 ha-290859 localhost minikube]
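
The server certificate is minted with the SAN set [127.0.0.1 192.168.39.110 ha-290859 localhost minikube] so one cert covers loopback, IP, and hostname access. A self-signed crypto/x509 sketch with the same SAN shape (the real flow signs with the ca.pem/ca-key.pem pair rather than self-signing):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-290859"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0), // CertExpiration:26280h0m0s in the config is ~3 years
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-290859", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.110")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
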
	I0414 14:29:07.096347 1213155 provision.go:177] copyRemoteCerts
	I0414 14:29:07.096413 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:29:07.096445 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.099387 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.099720 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.099754 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.099919 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.100133 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.100320 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.100477 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:07.185597 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0414 14:29:07.185665 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 14:29:07.208427 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0414 14:29:07.208514 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0414 14:29:07.230077 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0414 14:29:07.230146 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 14:29:07.252057 1213155 provision.go:87] duration metric: took 244.822415ms to configureAuth
	I0414 14:29:07.252098 1213155 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:29:07.252381 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:07.252417 1213155 main.go:141] libmachine: Checking connection to Docker...
	I0414 14:29:07.252428 1213155 main.go:141] libmachine: (ha-290859) Calling .GetURL
	I0414 14:29:07.253526 1213155 main.go:141] libmachine: (ha-290859) DBG | using libvirt version 6000000
	I0414 14:29:07.255629 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.255987 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.256013 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.256164 1213155 main.go:141] libmachine: Docker is up and running!
	I0414 14:29:07.256179 1213155 main.go:141] libmachine: Reticulating splines...
	I0414 14:29:07.256186 1213155 client.go:171] duration metric: took 22.312490028s to LocalClient.Create
	I0414 14:29:07.256207 1213155 start.go:167] duration metric: took 22.312544194s to libmachine.API.Create "ha-290859"
	I0414 14:29:07.256216 1213155 start.go:293] postStartSetup for "ha-290859" (driver="kvm2")
	I0414 14:29:07.256225 1213155 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:29:07.256242 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.256494 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:29:07.256518 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.258683 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.259095 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.259129 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.259274 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.259443 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.259598 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.259770 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:07.341222 1213155 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:29:07.344960 1213155 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:29:07.344983 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/addons for local assets ...
	I0414 14:29:07.345036 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/files for local assets ...
	I0414 14:29:07.345105 1213155 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> 12036392.pem in /etc/ssl/certs
	I0414 14:29:07.345117 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /etc/ssl/certs/12036392.pem
	I0414 14:29:07.345204 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:29:07.353618 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:29:07.375295 1213155 start.go:296] duration metric: took 119.0622ms for postStartSetup
	I0414 14:29:07.375348 1213155 main.go:141] libmachine: (ha-290859) Calling .GetConfigRaw
	I0414 14:29:07.376009 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:29:07.378738 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.379089 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.379127 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.379360 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:29:07.379552 1213155 start.go:128] duration metric: took 22.454193164s to createHost
	I0414 14:29:07.379576 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.381911 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.382271 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.382299 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.382412 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.382636 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.382763 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.382918 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.383103 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:07.383383 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:07.383397 1213155 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:29:07.491798 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744640947.466359070
	
	I0414 14:29:07.491832 1213155 fix.go:216] guest clock: 1744640947.466359070
	I0414 14:29:07.491843 1213155 fix.go:229] Guest: 2025-04-14 14:29:07.46635907 +0000 UTC Remote: 2025-04-14 14:29:07.37956282 +0000 UTC m=+22.563725092 (delta=86.79625ms)
	I0414 14:29:07.491874 1213155 fix.go:200] guest clock delta is within tolerance: 86.79625ms
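
The 86.79625ms delta is the guest's "date +%s.%N" reading measured against the host-side timestamp; the guest clock is only forced when drift exceeds a tolerance. A sketch of that comparison (the 2s bound below is an assumption for illustration, not minikube's constant):

    package main

    import (
        "fmt"
        "time"
    )

    // withinTolerance reports whether guest/host clock drift is acceptable.
    func withinTolerance(guest, host time.Time, max time.Duration) bool {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta <= max
    }

    func main() {
        host := time.Now()
        guest := host.Add(86796250 * time.Nanosecond) // the 86.79625ms delta above
        fmt.Println(withinTolerance(guest, host, 2*time.Second)) // true
    }
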
	I0414 14:29:07.491882 1213155 start.go:83] releasing machines lock for "ha-290859", held for 22.566621352s
	I0414 14:29:07.491951 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.492257 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:29:07.494784 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.495186 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.495213 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.495369 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.495891 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.496108 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.496210 1213155 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:29:07.496270 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.496330 1213155 ssh_runner.go:195] Run: cat /version.json
	I0414 14:29:07.496359 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.499187 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.499556 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.499585 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.499605 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.499687 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.499909 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.500059 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.500076 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.500080 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.500225 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:07.500343 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.500495 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.500676 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.500868 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:07.610155 1213155 ssh_runner.go:195] Run: systemctl --version
	I0414 14:29:07.615832 1213155 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 14:29:07.620841 1213155 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:29:07.620918 1213155 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:29:07.635201 1213155 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 14:29:07.635238 1213155 start.go:495] detecting cgroup driver to use...
	I0414 14:29:07.635339 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0414 14:29:07.664507 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0414 14:29:07.677886 1213155 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:29:07.677968 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:29:07.691126 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:29:07.704327 1213155 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:29:07.821296 1213155 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:29:07.981478 1213155 docker.go:233] disabling docker service ...
	I0414 14:29:07.981570 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:29:07.995082 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:29:08.007593 1213155 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:29:08.118166 1213155 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:29:08.233009 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 14:29:08.245943 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:29:08.262966 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0414 14:29:08.272218 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0414 14:29:08.281344 1213155 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0414 14:29:08.281397 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0414 14:29:08.290468 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:29:08.299561 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0414 14:29:08.308656 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:29:08.317719 1213155 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:29:08.327133 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0414 14:29:08.336264 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0414 14:29:08.345279 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
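
Each of the steps above is a single sed substitution against /etc/containerd/config.toml: pause image, restrict_oom_score_adj, cgroupfs driver, runc v2, CNI conf_dir, unprivileged ports. The same kind of indentation-preserving multiline edit expressed with Go's regexp package, one rule shown (illustrative only):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        cfg := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true
    `
        // Mirrors: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        fmt.Print(re.ReplaceAllString(cfg, "${1}SystemdCgroup = false"))
    }
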
	I0414 14:29:08.354386 1213155 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:29:08.362578 1213155 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:29:08.362625 1213155 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:29:08.374609 1213155 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
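
The status-255 sysctl failure is expected on a fresh VM: /proc/sys/net/bridge/ only exists once br_netfilter is loaded, so the order is probe, modprobe on failure, then enable IPv4 forwarding. A sketch of that sequence against a generic command runner (the runner signature is illustrative):

    package main

    import "fmt"

    // ensureBridgeNetfilter mirrors the probe -> modprobe -> ip_forward
    // sequence from the log above.
    func ensureBridgeNetfilter(run func(cmd string) error) error {
        if err := run("sudo sysctl net.bridge.bridge-nf-call-iptables"); err != nil {
            if err := run("sudo modprobe br_netfilter"); err != nil {
                return err
            }
        }
        return run(`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`)
    }

    func main() {
        err := ensureBridgeNetfilter(func(cmd string) error {
            fmt.Println("run:", cmd) // stand-in for the SSH runner
            return nil
        })
        fmt.Println("err:", err)
    }
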
	I0414 14:29:08.383117 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:29:08.490311 1213155 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0414 14:29:08.517222 1213155 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0414 14:29:08.517297 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:29:08.522141 1213155 retry.go:31] will retry after 1.326617724s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
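
containerd was just restarted, so its socket may not exist yet; the stat is simply retried within the 60s budget. A simplified polling sketch (fixed one-second interval in place of the randomized backoff the retry above logs):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls path until it exists or timeout elapses.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for %s", path)
            }
            time.Sleep(time.Second)
        }
    }

    func main() {
        fmt.Println(waitForSocket("/run/containerd/containerd.sock", 60*time.Second))
    }
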
	I0414 14:29:09.849693 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:29:09.855377 1213155 start.go:563] Will wait 60s for crictl version
	I0414 14:29:09.855452 1213155 ssh_runner.go:195] Run: which crictl
	I0414 14:29:09.859356 1213155 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:29:09.901676 1213155 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0414 14:29:09.901749 1213155 ssh_runner.go:195] Run: containerd --version
	I0414 14:29:09.933729 1213155 ssh_runner.go:195] Run: containerd --version
	I0414 14:29:09.957147 1213155 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.23 ...
	I0414 14:29:09.958358 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:29:09.961074 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:09.961436 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:09.961465 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:09.961654 1213155 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 14:29:09.965618 1213155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:29:09.977763 1213155 kubeadm.go:883] updating cluster {Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 14:29:09.977920 1213155 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:29:09.977985 1213155 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:29:10.007423 1213155 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0414 14:29:10.007567 1213155 ssh_runner.go:195] Run: which lz4
	I0414 14:29:10.011302 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0414 14:29:10.011399 1213155 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 14:29:10.015201 1213155 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 14:29:10.015237 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (398567491 bytes)
	I0414 14:29:11.177802 1213155 containerd.go:563] duration metric: took 1.166430977s to copy over tarball
	I0414 14:29:11.177883 1213155 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 14:29:13.222422 1213155 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.044497794s)
	I0414 14:29:13.222461 1213155 containerd.go:570] duration metric: took 2.04462504s to extract the tarball
	I0414 14:29:13.222471 1213155 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 14:29:13.258541 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:29:13.368119 1213155 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0414 14:29:13.394813 1213155 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:29:13.428402 1213155 retry.go:31] will retry after 248.442754ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-04-14T14:29:13Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0414 14:29:13.677983 1213155 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:29:13.709958 1213155 containerd.go:627] all images are preloaded for containerd runtime.
	I0414 14:29:13.709986 1213155 cache_images.go:84] Images are preloaded, skipping loading
	I0414 14:29:13.709997 1213155 kubeadm.go:934] updating node { 192.168.39.110 8443 v1.32.2 containerd true true} ...
	I0414 14:29:13.710119 1213155 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-290859 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.110
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 14:29:13.710205 1213155 ssh_runner.go:195] Run: sudo crictl info
	I0414 14:29:13.747854 1213155 cni.go:84] Creating CNI manager for ""
	I0414 14:29:13.747881 1213155 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0414 14:29:13.747891 1213155 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 14:29:13.747912 1213155 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.110 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-290859 NodeName:ha-290859 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.110"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.110 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 14:29:13.748064 1213155 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.110
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-290859"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.110"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.110"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
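
The generated kubeadm.yaml is one file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), which kubeadm dispatches by kind. A stdlib-only sketch of that split (real parsing would use a YAML library):

    package main

    import (
        "fmt"
        "strings"
    )

    // kinds lists the kind: of each document in a multi-doc YAML stream.
    func kinds(cfg string) []string {
        var out []string
        for _, doc := range strings.Split(cfg, "\n---\n") {
            for _, line := range strings.Split(doc, "\n") {
                if v, ok := strings.CutPrefix(strings.TrimSpace(line), "kind:"); ok {
                    out = append(out, strings.TrimSpace(v))
                }
            }
        }
        return out
    }

    func main() {
        fmt.Println(kinds("kind: InitConfiguration\n---\nkind: ClusterConfiguration")) // [InitConfiguration ClusterConfiguration]
    }
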
	I0414 14:29:13.748098 1213155 kube-vip.go:115] generating kube-vip config ...
	I0414 14:29:13.748144 1213155 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0414 14:29:13.764006 1213155 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0414 14:29:13.764157 1213155 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.10
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
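
kube-vip runs as a static pod on every control-plane node and holds the VIP 192.168.39.254 through leader election on the plndr-cp-lock lease; the three timing env vars map onto the usual leader-election knobs. A sketch of that relationship (the strict ordering check is an assumption based on how Kubernetes leader election is normally validated, not taken from kube-vip's source):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        lease := 5 * time.Second // vip_leaseduration: how long a held lease stays valid
        renew := 3 * time.Second // vip_renewdeadline: the leader must renew within this window
        retry := 1 * time.Second // vip_retryperiod: how often candidates retry acquisition
        // Leader election expects leaseDuration > renewDeadline > retryPeriod.
        fmt.Println("valid:", lease > renew && renew > retry) // true
    }
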
	I0414 14:29:13.764258 1213155 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 14:29:13.773742 1213155 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 14:29:13.773825 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0414 14:29:13.782879 1213155 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0414 14:29:13.798384 1213155 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 14:29:13.813614 1213155 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0414 14:29:13.828571 1213155 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1448 bytes)
	I0414 14:29:13.844489 1213155 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0414 14:29:13.848595 1213155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:29:13.861109 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:29:13.970530 1213155 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:29:13.987774 1213155 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859 for IP: 192.168.39.110
	I0414 14:29:13.987806 1213155 certs.go:194] generating shared ca certs ...
	I0414 14:29:13.987826 1213155 certs.go:226] acquiring lock for ca certs: {Name:mk7215406b4c41badf9eca6bf9f1036fd88f670e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:13.988007 1213155 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key
	I0414 14:29:13.988081 1213155 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key
	I0414 14:29:13.988097 1213155 certs.go:256] generating profile certs ...
	I0414 14:29:13.988180 1213155 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key
	I0414 14:29:13.988200 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt with IP's: []
	I0414 14:29:14.112386 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt ...
	I0414 14:29:14.112419 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt: {Name:mkaa12fb6551a5751b7fccd564d65a45c41d9fae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.112582 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key ...
	I0414 14:29:14.112593 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key: {Name:mk289f4dd0a4fd9031dc4ffc7198a0cf95bd5550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.112674 1213155 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037
	I0414 14:29:14.112690 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110 192.168.39.254]
	I0414 14:29:14.362652 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037 ...
	I0414 14:29:14.362686 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037: {Name:mkb37a2918627d85c90b385a1878c8973ae4ce15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.362861 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037 ...
	I0414 14:29:14.362875 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037: {Name:mk9be12aff468559ae8511cb5c354c2cb0f19d89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.362947 1213155 certs.go:381] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt
	I0414 14:29:14.363058 1213155 certs.go:385] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key
	I0414 14:29:14.363124 1213155 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key
	I0414 14:29:14.363139 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt with IP's: []
	I0414 14:29:14.734988 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt ...
	I0414 14:29:14.735020 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt: {Name:mkd4197f76084714cf4c93b86f69c9de5e486dfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.735175 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key ...
	I0414 14:29:14.735185 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key: {Name:mkafd73813de8b0bb698e460f51557bc241d5b76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.735249 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0414 14:29:14.735287 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0414 14:29:14.735300 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0414 14:29:14.735312 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0414 14:29:14.735324 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0414 14:29:14.735336 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0414 14:29:14.735348 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0414 14:29:14.735362 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0414 14:29:14.735413 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem (1338 bytes)
	W0414 14:29:14.735450 1213155 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639_empty.pem, impossibly tiny 0 bytes
	I0414 14:29:14.735459 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 14:29:14.735483 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem (1082 bytes)
	I0414 14:29:14.735504 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:29:14.735524 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem (1675 bytes)
	I0414 14:29:14.735559 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:29:14.735585 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:14.735598 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem -> /usr/share/ca-certificates/1203639.pem
	I0414 14:29:14.735609 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /usr/share/ca-certificates/12036392.pem
	I0414 14:29:14.736193 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:29:14.767094 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:29:14.800218 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:29:14.821856 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:29:14.844537 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0414 14:29:14.866333 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 14:29:14.888112 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:29:14.916382 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 14:29:14.938747 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:29:14.961044 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem --> /usr/share/ca-certificates/1203639.pem (1338 bytes)
	I0414 14:29:14.982817 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /usr/share/ca-certificates/12036392.pem (1708 bytes)
	I0414 14:29:15.004432 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 14:29:15.020381 1213155 ssh_runner.go:195] Run: openssl version
	I0414 14:29:15.026049 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:29:15.036472 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:15.040722 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:15.040772 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:15.046327 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 14:29:15.056866 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1203639.pem && ln -fs /usr/share/ca-certificates/1203639.pem /etc/ssl/certs/1203639.pem"
	I0414 14:29:15.067689 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1203639.pem
	I0414 14:29:15.071944 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1203639.pem
	I0414 14:29:15.071988 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1203639.pem
	I0414 14:29:15.077553 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1203639.pem /etc/ssl/certs/51391683.0"
	I0414 14:29:15.088088 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12036392.pem && ln -fs /usr/share/ca-certificates/12036392.pem /etc/ssl/certs/12036392.pem"
	I0414 14:29:15.098760 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12036392.pem
	I0414 14:29:15.103102 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/12036392.pem
	I0414 14:29:15.103157 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12036392.pem
	I0414 14:29:15.108670 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12036392.pem /etc/ssl/certs/3ec20f2e.0"
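
The hash-named symlinks (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's c_rehash layout: openssl x509 -hash prints the certificate's subject hash, and CApath lookups resolve trust through exactly those names. A sketch of the link step, shelling out to openssl as the provisioner does (path copied from the log; the symlink itself needs root, like the sudo ln -fs above):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            panic(err)
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        fmt.Println("ln -fs", pemPath, link)
        _ = os.Symlink(pemPath, link) // fails without root; shown for the layout only
    }
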
	I0414 14:29:15.119187 1213155 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:29:15.123052 1213155 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 14:29:15.123124 1213155 kubeadm.go:392] StartCluster: {Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:29:15.123226 1213155 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0414 14:29:15.123302 1213155 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 14:29:15.161985 1213155 cri.go:89] found id: ""
	I0414 14:29:15.162066 1213155 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 14:29:15.171810 1213155 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 14:29:15.180816 1213155 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 14:29:15.189781 1213155 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 14:29:15.189798 1213155 kubeadm.go:157] found existing configuration files:
	
	I0414 14:29:15.189837 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 14:29:15.198461 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 14:29:15.198520 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 14:29:15.207495 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 14:29:15.216131 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 14:29:15.216195 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 14:29:15.224923 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 14:29:15.233259 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 14:29:15.233331 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 14:29:15.241811 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 14:29:15.250678 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 14:29:15.250735 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
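The four grep/rm pairs above are one loop: for each kubeconfig under /etc/kubernetes, keep it only if it already points at the expected control-plane endpoint, otherwise delete it so kubeadm regenerates it. A condensed sketch of that loop (commands run locally here for illustration; in minikube each one goes through the SSH runner):

    package main

    import (
        "fmt"
        "os/exec"
    )

    const endpoint = "https://control-plane.minikube.internal:8443"

    func main() {
        for _, conf := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
            path := "/etc/kubernetes/" + conf
            // grep exits non-zero when the endpoint (or the whole file) is
            // missing; in either case the config is stale and gets removed.
            if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
                fmt.Printf("%q not found in %s - removing\n", endpoint, path)
                _ = exec.Command("sudo", "rm", "-f", path).Run()
            }
        }
    }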
	I0414 14:29:15.260028 1213155 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 14:29:15.480841 1213155 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 14:29:26.375395 1213155 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0414 14:29:26.375454 1213155 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 14:29:26.375539 1213155 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 14:29:26.375638 1213155 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 14:29:26.375756 1213155 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 14:29:26.375859 1213155 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 14:29:26.377483 1213155 out.go:235]   - Generating certificates and keys ...
	I0414 14:29:26.377576 1213155 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 14:29:26.377649 1213155 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 14:29:26.377746 1213155 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 14:29:26.377814 1213155 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 14:29:26.377894 1213155 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 14:29:26.377993 1213155 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 14:29:26.378062 1213155 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 14:29:26.378201 1213155 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-290859 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
	I0414 14:29:26.378273 1213155 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 14:29:26.378435 1213155 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-290859 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
	I0414 14:29:26.378525 1213155 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 14:29:26.378617 1213155 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 14:29:26.378679 1213155 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 14:29:26.378756 1213155 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 14:29:26.378826 1213155 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 14:29:26.378905 1213155 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0414 14:29:26.378987 1213155 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 14:29:26.379078 1213155 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 14:29:26.379147 1213155 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 14:29:26.379232 1213155 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 14:29:26.379336 1213155 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 14:29:26.381520 1213155 out.go:235]   - Booting up control plane ...
	I0414 14:29:26.381636 1213155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 14:29:26.381716 1213155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 14:29:26.381797 1213155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 14:29:26.381942 1213155 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 14:29:26.382066 1213155 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 14:29:26.382127 1213155 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 14:29:26.382279 1213155 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0414 14:29:26.382430 1213155 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0414 14:29:26.382522 1213155 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.073677ms
	I0414 14:29:26.382613 1213155 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0414 14:29:26.382699 1213155 kubeadm.go:310] [api-check] The API server is healthy after 6.046564753s
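The two health gates above ([kubelet-check] against 127.0.0.1:10248/healthz and [api-check] against the apiserver) are plain HTTP polls with a deadline. A minimal sketch of that pattern (the 4m0s budget and the kubelet URL come from the log; the poll interval is an assumption):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthy polls url until it returns 200 OK or the deadline passes,
    // which is the shape of kubeadm's kubelet-check and api-check phases.
    func waitHealthy(url string, timeout, interval time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := http.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(interval)
        }
        return fmt.Errorf("%s not healthy after %s", url, timeout)
    }

    func main() {
        if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute, 500*time.Millisecond); err != nil {
            fmt.Println(err)
        }
    }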
	I0414 14:29:26.382824 1213155 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0414 14:29:26.382965 1213155 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0414 14:29:26.383055 1213155 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0414 14:29:26.383232 1213155 kubeadm.go:310] [mark-control-plane] Marking the node ha-290859 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0414 14:29:26.383336 1213155 kubeadm.go:310] [bootstrap-token] Using token: vqb1fe.jxjhh2el8g0wstxf
	I0414 14:29:26.384515 1213155 out.go:235]   - Configuring RBAC rules ...
	I0414 14:29:26.384631 1213155 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0414 14:29:26.384713 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0414 14:29:26.384863 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0414 14:29:26.384975 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0414 14:29:26.385071 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0414 14:29:26.385151 1213155 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0414 14:29:26.385262 1213155 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0414 14:29:26.385326 1213155 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0414 14:29:26.385400 1213155 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0414 14:29:26.385416 1213155 kubeadm.go:310] 
	I0414 14:29:26.385469 1213155 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0414 14:29:26.385475 1213155 kubeadm.go:310] 
	I0414 14:29:26.385551 1213155 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0414 14:29:26.385557 1213155 kubeadm.go:310] 
	I0414 14:29:26.385578 1213155 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0414 14:29:26.385628 1213155 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0414 14:29:26.385686 1213155 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0414 14:29:26.385693 1213155 kubeadm.go:310] 
	I0414 14:29:26.385743 1213155 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0414 14:29:26.385752 1213155 kubeadm.go:310] 
	I0414 14:29:26.385800 1213155 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0414 14:29:26.385806 1213155 kubeadm.go:310] 
	I0414 14:29:26.385852 1213155 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0414 14:29:26.385921 1213155 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0414 14:29:26.385993 1213155 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0414 14:29:26.385999 1213155 kubeadm.go:310] 
	I0414 14:29:26.386068 1213155 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0414 14:29:26.386137 1213155 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0414 14:29:26.386143 1213155 kubeadm.go:310] 
	I0414 14:29:26.386219 1213155 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token vqb1fe.jxjhh2el8g0wstxf \
	I0414 14:29:26.386324 1213155 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c1bc537cee1b1ab5982921331b936a1839b1da6b0963279993bdeae11071854b \
	I0414 14:29:26.386357 1213155 kubeadm.go:310] 	--control-plane 
	I0414 14:29:26.386367 1213155 kubeadm.go:310] 
	I0414 14:29:26.386468 1213155 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0414 14:29:26.386481 1213155 kubeadm.go:310] 
	I0414 14:29:26.386583 1213155 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vqb1fe.jxjhh2el8g0wstxf \
	I0414 14:29:26.386727 1213155 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c1bc537cee1b1ab5982921331b936a1839b1da6b0963279993bdeae11071854b 
	I0414 14:29:26.386755 1213155 cni.go:84] Creating CNI manager for ""
	I0414 14:29:26.386764 1213155 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0414 14:29:26.388208 1213155 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0414 14:29:26.389242 1213155 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0414 14:29:26.394753 1213155 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0414 14:29:26.394774 1213155 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0414 14:29:26.412210 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
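The `scp memory -->` line means the CNI manifest is held in memory and streamed to /var/tmp/minikube/cni.yaml over SSH before kubectl applies it. A rough local equivalent (os.WriteFile stands in for the SSH transfer; the kubectl path and flags are the ones logged, and the manifest content is a placeholder):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        manifest := []byte("# kindnet DaemonSet manifest would go here\n")
        // Stand-in for `scp memory --> /var/tmp/minikube/cni.yaml`.
        if err := os.WriteFile("/var/tmp/minikube/cni.yaml", manifest, 0o644); err != nil {
            fmt.Println("write:", err)
            return
        }
        cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.32.2/kubectl",
            "apply", "--kubeconfig=/var/lib/minikube/kubeconfig",
            "-f", "/var/tmp/minikube/cni.yaml")
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            fmt.Println("apply:", err)
        }
    }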
	I0414 14:29:26.820060 1213155 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 14:29:26.820136 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:26.820188 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-290859 minikube.k8s.io/updated_at=2025_04_14T14_29_26_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=ed8f1f01b35eff2786f40199152a1775806f2de2 minikube.k8s.io/name=ha-290859 minikube.k8s.io/primary=true
	I0414 14:29:27.135153 1213155 ops.go:34] apiserver oom_adj: -16
	I0414 14:29:27.135367 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:27.635449 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:28.135449 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:28.636235 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:29.136309 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:29.636026 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:29.742992 1213155 kubeadm.go:1113] duration metric: took 2.922923817s to wait for elevateKubeSystemPrivileges
	I0414 14:29:29.743045 1213155 kubeadm.go:394] duration metric: took 14.619926947s to StartCluster
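The burst of `kubectl get sa default` calls above (one every ~500ms) is elevateKubeSystemPrivileges waiting for the controller-manager to create the default ServiceAccount before granting cluster-admin to kube-system. A sketch of that polling loop (the overall budget is an assumption; the command and cadence match the log):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.32.2/kubectl"
        deadline := time.Now().Add(2 * time.Minute) // budget is an assumption
        for time.Now().Before(deadline) {
            // Succeeds once the "default" ServiceAccount exists.
            err := exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig").Run()
            if err == nil {
                fmt.Println("default service account is ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for default service account")
    }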
	I0414 14:29:29.743074 1213155 settings.go:142] acquiring lock: {Name:mk41907a6d0da0bb56b7cd58b5d8065ec36ecc97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:29.743194 1213155 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:29:29.744197 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/kubeconfig: {Name:mkeb969af3beabfdafe344f27031959a97621135 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:29.744491 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0414 14:29:29.744502 1213155 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0414 14:29:29.744531 1213155 start.go:241] waiting for startup goroutines ...
	I0414 14:29:29.744555 1213155 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 14:29:29.744638 1213155 addons.go:69] Setting storage-provisioner=true in profile "ha-290859"
	I0414 14:29:29.744667 1213155 addons.go:238] Setting addon storage-provisioner=true in "ha-290859"
	I0414 14:29:29.744674 1213155 addons.go:69] Setting default-storageclass=true in profile "ha-290859"
	I0414 14:29:29.744699 1213155 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:29:29.744707 1213155 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-290859"
	I0414 14:29:29.744811 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:29.745181 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.745244 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.745183 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.745351 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.761398 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40887
	I0414 14:29:29.761447 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39907
	I0414 14:29:29.761914 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.762048 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.762457 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.762483 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.762590 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.762615 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.762878 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.762995 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.763052 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:29.763589 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.763641 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.765711 1213155 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:29:29.765898 1213155 kapi.go:59] client config for ha-290859: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt", KeyFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key", CAFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
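The client config printed above is an ordinary client-go rest.Config built from the profile's client cert/key and the cluster CA, pointed at the HA VIP (192.168.39.254:8443) rather than any single node (rest.sanitizedTLSClientConfig is just the redacted form of rest.TLSClientConfig in the log). A minimal sketch of constructing an equivalent config by hand (paths from the log; exact client-go signatures may vary slightly by version):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg := &rest.Config{
            Host: "https://192.168.39.254:8443", // the APIServerHAVIP, not a node IP
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: "/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt",
                KeyFile:  "/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key",
                CAFile:   "/home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt",
            },
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("nodes:", len(nodes.Items))
    }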
	I0414 14:29:29.766513 1213155 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0414 14:29:29.766536 1213155 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0414 14:29:29.766543 1213155 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0414 14:29:29.766547 1213155 cert_rotation.go:140] Starting client certificate rotation controller
	I0414 14:29:29.766549 1213155 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0414 14:29:29.766958 1213155 addons.go:238] Setting addon default-storageclass=true in "ha-290859"
	I0414 14:29:29.767009 1213155 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:29:29.767411 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.767464 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.779638 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46315
	I0414 14:29:29.780179 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.780847 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.780887 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.781279 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.781512 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:29.783372 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:29.783403 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36833
	I0414 14:29:29.783908 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.784349 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.784370 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.784677 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.785084 1213155 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 14:29:29.785313 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.785366 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.786178 1213155 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 14:29:29.786200 1213155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 14:29:29.786221 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:29.789923 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:29.790430 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:29.790464 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:29.790637 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:29.790795 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:29.790922 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:29.791099 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:29.802732 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37933
	I0414 14:29:29.803356 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.803862 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.803890 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.804276 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.804490 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:29.806170 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:29.806431 1213155 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 14:29:29.806453 1213155 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 14:29:29.806472 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:29.808998 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:29.809401 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:29.809433 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:29.809569 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:29.809729 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:29.809892 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:29.810022 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:29.896163 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0414 14:29:29.925192 1213155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 14:29:29.976032 1213155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 14:29:30.538988 1213155 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
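The long pipeline above fetches the coredns ConfigMap, splices a hosts block (mapping host.minikube.internal to the host gateway 192.168.39.1) in front of the forward plugin via sed, and replaces the ConfigMap. A string-level sketch of that edit (simplified; the real pipeline also inserts a log directive after errors):

    package main

    import (
        "fmt"
        "strings"
    )

    const hostsBlock = `        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
`

    func main() {
        corefile := `.:53 {
        errors
        health
        forward . /etc/resolv.conf
        cache 30
}
`
        // Insert the hosts block immediately before the forward plugin,
        // mirroring the sed `/forward . \/etc\/resolv.conf/i ...` edit.
        patched := strings.Replace(corefile,
            "        forward . /etc/resolv.conf",
            hostsBlock+"        forward . /etc/resolv.conf", 1)
        fmt.Print(patched)
    }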
	I0414 14:29:30.715801 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.715837 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.715837 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.715853 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.716172 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.716195 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.716206 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.716213 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.716280 1213155 main.go:141] libmachine: (ha-290859) DBG | Closing plugin on server side
	I0414 14:29:30.716311 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.716327 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.716336 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.716346 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.716567 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.716583 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.716597 1213155 main.go:141] libmachine: (ha-290859) DBG | Closing plugin on server side
	I0414 14:29:30.716566 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.716613 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.716759 1213155 round_trippers.go:470] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0414 14:29:30.716773 1213155 round_trippers.go:476] Request Headers:
	I0414 14:29:30.716785 1213155 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:29:30.716791 1213155 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:29:30.730413 1213155 round_trippers.go:581] Response Status: 200 OK in 13 milliseconds
	I0414 14:29:30.730637 1213155 round_trippers.go:470] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0414 14:29:30.730648 1213155 round_trippers.go:476] Request Headers:
	I0414 14:29:30.730655 1213155 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:29:30.730659 1213155 round_trippers.go:480]     Content-Type: application/vnd.kubernetes.protobuf
	I0414 14:29:30.730662 1213155 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:29:30.734349 1213155 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
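The GET and PUT against /apis/storage.k8s.io/v1/storageclasses above are the default-storageclass addon making sure the standard class carries the default annotation. A hedged client-go sketch of that read-modify-write (the annotation key is the standard upstream one; names and error handling are simplified):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.Background()
        sc, err := client.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        if sc.Annotations == nil {
            sc.Annotations = map[string]string{}
        }
        // Mark the class as the cluster default, then PUT it back.
        sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
        if _, err := client.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("standard marked as default storage class")
    }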
	I0414 14:29:30.734498 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.734513 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.734892 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.734913 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.734944 1213155 main.go:141] libmachine: (ha-290859) DBG | Closing plugin on server side
	I0414 14:29:30.736606 1213155 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0414 14:29:30.738276 1213155 addons.go:514] duration metric: took 993.723048ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0414 14:29:30.738323 1213155 start.go:246] waiting for cluster config update ...
	I0414 14:29:30.738339 1213155 start.go:255] writing updated cluster config ...
	I0414 14:29:30.739993 1213155 out.go:201] 
	I0414 14:29:30.741235 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:30.741303 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:29:30.742718 1213155 out.go:177] * Starting "ha-290859-m02" control-plane node in "ha-290859" cluster
	I0414 14:29:30.743745 1213155 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:29:30.743770 1213155 cache.go:56] Caching tarball of preloaded images
	I0414 14:29:30.743876 1213155 preload.go:172] Found /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0414 14:29:30.743890 1213155 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0414 14:29:30.743970 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:29:30.744172 1213155 start.go:360] acquireMachinesLock for ha-290859-m02: {Name:mk496006d22a0565bb9e0d565e1b3cb0cf0971cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 14:29:30.744229 1213155 start.go:364] duration metric: took 28.185µs to acquireMachinesLock for "ha-290859-m02"
	I0414 14:29:30.744249 1213155 start.go:93] Provisioning new machine with config: &{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0414 14:29:30.744334 1213155 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0414 14:29:30.745838 1213155 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0414 14:29:30.745923 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:30.745962 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:30.761449 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46555
	I0414 14:29:30.761938 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:30.762474 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:30.762500 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:30.762925 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:30.763197 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:29:30.763401 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:30.763637 1213155 start.go:159] libmachine.API.Create for "ha-290859" (driver="kvm2")
	I0414 14:29:30.763675 1213155 client.go:168] LocalClient.Create starting
	I0414 14:29:30.763717 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem
	I0414 14:29:30.763761 1213155 main.go:141] libmachine: Decoding PEM data...
	I0414 14:29:30.763783 1213155 main.go:141] libmachine: Parsing certificate...
	I0414 14:29:30.763861 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem
	I0414 14:29:30.763890 1213155 main.go:141] libmachine: Decoding PEM data...
	I0414 14:29:30.763907 1213155 main.go:141] libmachine: Parsing certificate...
	I0414 14:29:30.763954 1213155 main.go:141] libmachine: Running pre-create checks...
	I0414 14:29:30.763968 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .PreCreateCheck
	I0414 14:29:30.764183 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
	I0414 14:29:30.764607 1213155 main.go:141] libmachine: Creating machine...
	I0414 14:29:30.764633 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .Create
	I0414 14:29:30.764796 1213155 main.go:141] libmachine: (ha-290859-m02) creating KVM machine...
	I0414 14:29:30.764820 1213155 main.go:141] libmachine: (ha-290859-m02) creating network...
	I0414 14:29:30.765949 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found existing default KVM network
	I0414 14:29:30.766029 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found existing private KVM network mk-ha-290859
	I0414 14:29:30.766196 1213155 main.go:141] libmachine: (ha-290859-m02) setting up store path in /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02 ...
	I0414 14:29:30.766222 1213155 main.go:141] libmachine: (ha-290859-m02) building disk image from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 14:29:30.766301 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:30.766189 1213531 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:29:30.766373 1213155 main.go:141] libmachine: (ha-290859-m02) Downloading /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 14:29:31.062543 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:31.062391 1213531 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa...
	I0414 14:29:31.719024 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:31.718890 1213531 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/ha-290859-m02.rawdisk...
	I0414 14:29:31.719061 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Writing magic tar header
	I0414 14:29:31.719076 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Writing SSH key tar header
	I0414 14:29:31.719086 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:31.719015 1213531 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02 ...
	I0414 14:29:31.719187 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02
	I0414 14:29:31.719213 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02 (perms=drwx------)
	I0414 14:29:31.719221 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines
	I0414 14:29:31.719232 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:29:31.719239 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines (perms=drwxr-xr-x)
	I0414 14:29:31.719270 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368
	I0414 14:29:31.719288 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube (perms=drwxr-xr-x)
	I0414 14:29:31.719298 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 14:29:31.719315 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins
	I0414 14:29:31.719326 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home
	I0414 14:29:31.719336 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | skipping /home - not owner
	I0414 14:29:31.719349 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368 (perms=drwxrwxr-x)
	I0414 14:29:31.719368 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 14:29:31.719380 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 14:29:31.719386 1213155 main.go:141] libmachine: (ha-290859-m02) creating domain...
	I0414 14:29:31.720303 1213155 main.go:141] libmachine: (ha-290859-m02) define libvirt domain using xml: 
	I0414 14:29:31.720321 1213155 main.go:141] libmachine: (ha-290859-m02) <domain type='kvm'>
	I0414 14:29:31.720330 1213155 main.go:141] libmachine: (ha-290859-m02)   <name>ha-290859-m02</name>
	I0414 14:29:31.720338 1213155 main.go:141] libmachine: (ha-290859-m02)   <memory unit='MiB'>2200</memory>
	I0414 14:29:31.720346 1213155 main.go:141] libmachine: (ha-290859-m02)   <vcpu>2</vcpu>
	I0414 14:29:31.720352 1213155 main.go:141] libmachine: (ha-290859-m02)   <features>
	I0414 14:29:31.720359 1213155 main.go:141] libmachine: (ha-290859-m02)     <acpi/>
	I0414 14:29:31.720364 1213155 main.go:141] libmachine: (ha-290859-m02)     <apic/>
	I0414 14:29:31.720371 1213155 main.go:141] libmachine: (ha-290859-m02)     <pae/>
	I0414 14:29:31.720381 1213155 main.go:141] libmachine: (ha-290859-m02)     
	I0414 14:29:31.720411 1213155 main.go:141] libmachine: (ha-290859-m02)   </features>
	I0414 14:29:31.720433 1213155 main.go:141] libmachine: (ha-290859-m02)   <cpu mode='host-passthrough'>
	I0414 14:29:31.720452 1213155 main.go:141] libmachine: (ha-290859-m02)   
	I0414 14:29:31.720461 1213155 main.go:141] libmachine: (ha-290859-m02)   </cpu>
	I0414 14:29:31.720488 1213155 main.go:141] libmachine: (ha-290859-m02)   <os>
	I0414 14:29:31.720507 1213155 main.go:141] libmachine: (ha-290859-m02)     <type>hvm</type>
	I0414 14:29:31.720537 1213155 main.go:141] libmachine: (ha-290859-m02)     <boot dev='cdrom'/>
	I0414 14:29:31.720559 1213155 main.go:141] libmachine: (ha-290859-m02)     <boot dev='hd'/>
	I0414 14:29:31.720572 1213155 main.go:141] libmachine: (ha-290859-m02)     <bootmenu enable='no'/>
	I0414 14:29:31.720587 1213155 main.go:141] libmachine: (ha-290859-m02)   </os>
	I0414 14:29:31.720597 1213155 main.go:141] libmachine: (ha-290859-m02)   <devices>
	I0414 14:29:31.720609 1213155 main.go:141] libmachine: (ha-290859-m02)     <disk type='file' device='cdrom'>
	I0414 14:29:31.720626 1213155 main.go:141] libmachine: (ha-290859-m02)       <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/boot2docker.iso'/>
	I0414 14:29:31.720637 1213155 main.go:141] libmachine: (ha-290859-m02)       <target dev='hdc' bus='scsi'/>
	I0414 14:29:31.720649 1213155 main.go:141] libmachine: (ha-290859-m02)       <readonly/>
	I0414 14:29:31.720659 1213155 main.go:141] libmachine: (ha-290859-m02)     </disk>
	I0414 14:29:31.720668 1213155 main.go:141] libmachine: (ha-290859-m02)     <disk type='file' device='disk'>
	I0414 14:29:31.720684 1213155 main.go:141] libmachine: (ha-290859-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 14:29:31.720699 1213155 main.go:141] libmachine: (ha-290859-m02)       <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/ha-290859-m02.rawdisk'/>
	I0414 14:29:31.720732 1213155 main.go:141] libmachine: (ha-290859-m02)       <target dev='hda' bus='virtio'/>
	I0414 14:29:31.720746 1213155 main.go:141] libmachine: (ha-290859-m02)     </disk>
	I0414 14:29:31.720756 1213155 main.go:141] libmachine: (ha-290859-m02)     <interface type='network'>
	I0414 14:29:31.720768 1213155 main.go:141] libmachine: (ha-290859-m02)       <source network='mk-ha-290859'/>
	I0414 14:29:31.720777 1213155 main.go:141] libmachine: (ha-290859-m02)       <model type='virtio'/>
	I0414 14:29:31.720788 1213155 main.go:141] libmachine: (ha-290859-m02)     </interface>
	I0414 14:29:31.720799 1213155 main.go:141] libmachine: (ha-290859-m02)     <interface type='network'>
	I0414 14:29:31.720809 1213155 main.go:141] libmachine: (ha-290859-m02)       <source network='default'/>
	I0414 14:29:31.720821 1213155 main.go:141] libmachine: (ha-290859-m02)       <model type='virtio'/>
	I0414 14:29:31.720835 1213155 main.go:141] libmachine: (ha-290859-m02)     </interface>
	I0414 14:29:31.720844 1213155 main.go:141] libmachine: (ha-290859-m02)     <serial type='pty'>
	I0414 14:29:31.720855 1213155 main.go:141] libmachine: (ha-290859-m02)       <target port='0'/>
	I0414 14:29:31.720865 1213155 main.go:141] libmachine: (ha-290859-m02)     </serial>
	I0414 14:29:31.720875 1213155 main.go:141] libmachine: (ha-290859-m02)     <console type='pty'>
	I0414 14:29:31.720886 1213155 main.go:141] libmachine: (ha-290859-m02)       <target type='serial' port='0'/>
	I0414 14:29:31.720896 1213155 main.go:141] libmachine: (ha-290859-m02)     </console>
	I0414 14:29:31.720909 1213155 main.go:141] libmachine: (ha-290859-m02)     <rng model='virtio'>
	I0414 14:29:31.720943 1213155 main.go:141] libmachine: (ha-290859-m02)       <backend model='random'>/dev/random</backend>
	I0414 14:29:31.720956 1213155 main.go:141] libmachine: (ha-290859-m02)     </rng>
	I0414 14:29:31.720962 1213155 main.go:141] libmachine: (ha-290859-m02)     
	I0414 14:29:31.720972 1213155 main.go:141] libmachine: (ha-290859-m02)     
	I0414 14:29:31.720978 1213155 main.go:141] libmachine: (ha-290859-m02)   </devices>
	I0414 14:29:31.720993 1213155 main.go:141] libmachine: (ha-290859-m02) </domain>
	I0414 14:29:31.721002 1213155 main.go:141] libmachine: (ha-290859-m02) 
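Once the domain XML above is assembled, the kvm2 driver hands it to libvirt to define and boot the guest (the "creating domain..." steps that follow). A minimal sketch of that hand-off with the libvirt Go bindings (assuming libvirt.org/go/libvirt is available; the XML here is pared down to the minimum libvirt will accept and elides the disks and interfaces from the log):

    package main

    import (
        "fmt"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        // qemu:///system is the KVMQemuURI from the cluster config.
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        domainXML := `<domain type='kvm'>
      <name>sketch</name>
      <memory unit='MiB'>2200</memory>
      <vcpu>2</vcpu>
      <os><type>hvm</type></os>
    </domain>`

        // Define the persistent domain from XML, then start it.
        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            panic(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil {
            panic(err)
        }
        fmt.Println("domain started")
    }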
	I0414 14:29:31.727524 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:76:01:7d in network default
	I0414 14:29:31.728172 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:31.728187 1213155 main.go:141] libmachine: (ha-290859-m02) starting domain...
	I0414 14:29:31.728195 1213155 main.go:141] libmachine: (ha-290859-m02) ensuring networks are active...
	I0414 14:29:31.728896 1213155 main.go:141] libmachine: (ha-290859-m02) Ensuring network default is active
	I0414 14:29:31.729170 1213155 main.go:141] libmachine: (ha-290859-m02) Ensuring network mk-ha-290859 is active
	I0414 14:29:31.729521 1213155 main.go:141] libmachine: (ha-290859-m02) getting domain XML...
	I0414 14:29:31.730489 1213155 main.go:141] libmachine: (ha-290859-m02) creating domain...
	I0414 14:29:32.993969 1213155 main.go:141] libmachine: (ha-290859-m02) waiting for IP...
	I0414 14:29:32.996009 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:32.996441 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:32.996505 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:32.996448 1213531 retry.go:31] will retry after 202.522594ms: waiting for domain to come up
	I0414 14:29:33.201175 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:33.201705 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:33.201751 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:33.201682 1213531 retry.go:31] will retry after 346.96007ms: waiting for domain to come up
	I0414 14:29:33.550485 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:33.550900 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:33.550931 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:33.550863 1213531 retry.go:31] will retry after 407.207189ms: waiting for domain to come up
	I0414 14:29:33.959550 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:33.960116 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:33.960149 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:33.960094 1213531 retry.go:31] will retry after 434.401549ms: waiting for domain to come up
	I0414 14:29:34.395749 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:34.396217 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:34.396267 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:34.396208 1213531 retry.go:31] will retry after 552.547121ms: waiting for domain to come up
	I0414 14:29:34.949860 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:34.950310 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:34.950344 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:34.950269 1213531 retry.go:31] will retry after 848.939274ms: waiting for domain to come up
	I0414 14:29:35.800706 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:35.801275 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:35.801301 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:35.801229 1213531 retry.go:31] will retry after 1.078619357s: waiting for domain to come up
	I0414 14:29:36.881700 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:36.882163 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:36.882187 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:36.882128 1213531 retry.go:31] will retry after 1.079210669s: waiting for domain to come up
	I0414 14:29:37.963455 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:37.963935 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:37.963969 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:37.963899 1213531 retry.go:31] will retry after 1.194058186s: waiting for domain to come up
	I0414 14:29:39.160481 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:39.160993 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:39.161031 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:39.160949 1213531 retry.go:31] will retry after 1.513626688s: waiting for domain to come up
	I0414 14:29:40.676551 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:40.677038 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:40.677071 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:40.677004 1213531 retry.go:31] will retry after 1.924347004s: waiting for domain to come up
	I0414 14:29:42.603644 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:42.604168 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:42.604192 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:42.604145 1213531 retry.go:31] will retry after 2.797639018s: waiting for domain to come up
	I0414 14:29:45.405004 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:45.405658 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:45.405688 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:45.405627 1213531 retry.go:31] will retry after 2.864814671s: waiting for domain to come up
	I0414 14:29:48.274060 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:48.274518 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:48.274591 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:48.274508 1213531 retry.go:31] will retry after 4.611052523s: waiting for domain to come up
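The retry.go lines above show the driver polling for the new domain's IP with growing, jittered sleeps (from ~200ms up to ~4.6s) until DHCP hands out a lease. A condensed version of that loop (lookupIP is a hypothetical stand-in for querying the mk-ha-290859 network's DHCP leases; the overall deadline is an assumption):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP is a placeholder for matching the domain's MAC address
    // against the libvirt network's DHCP leases.
    func lookupIP() (string, error) {
        return "", errors.New("unable to find current IP address")
    }

    func main() {
        delay := 200 * time.Millisecond
        deadline := time.Now().Add(3 * time.Minute) // budget is an assumption
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil {
                fmt.Println("found domain IP:", ip)
                return
            }
            // Jittered, roughly exponential backoff, like the logged intervals.
            sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
            fmt.Printf("will retry after %s: waiting for domain to come up\n", sleep)
            time.Sleep(sleep)
            delay = delay * 3 / 2
        }
        fmt.Println("gave up waiting for domain IP")
    }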
	I0414 14:29:52.886693 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:52.887068 1213155 main.go:141] libmachine: (ha-290859-m02) found domain IP: 192.168.39.111
	I0414 14:29:52.887093 1213155 main.go:141] libmachine: (ha-290859-m02) reserving static IP address...
	I0414 14:29:52.887105 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has current primary IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:52.887506 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find host DHCP lease matching {name: "ha-290859-m02", mac: "52:54:00:f0:fd:94", ip: "192.168.39.111"} in network mk-ha-290859
	I0414 14:29:52.966052 1213155 main.go:141] libmachine: (ha-290859-m02) reserved static IP address 192.168.39.111 for domain ha-290859-m02
	I0414 14:29:52.966083 1213155 main.go:141] libmachine: (ha-290859-m02) waiting for SSH...
	I0414 14:29:52.966091 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Getting to WaitForSSH function...
	I0414 14:29:52.968665 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:52.969034 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:52.969082 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:52.969208 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Using SSH client type: external
	I0414 14:29:52.969231 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa (-rw-------)
	I0414 14:29:52.969263 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:29:52.969282 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | About to run SSH command:
	I0414 14:29:52.969295 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | exit 0
	I0414 14:29:53.095336 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | SSH cmd err, output: <nil>: 
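[Editor's note] The WaitForSSH step above probes the new VM by running `exit 0` through an external ssh client until it returns cleanly. A hedged sketch of the same probe using the system ssh binary follows; the key path and address are placeholders taken from this run.

package main

import (
	"fmt"
	"os/exec"
)

// sshAlive runs `exit 0` with the same hardened options the log shows;
// a zero exit status means the guest's sshd answered and accepted the key.
func sshAlive(keyPath, addr string) bool {
	cmd := exec.Command("ssh",
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		addr,
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	// Placeholder key path; address as reported for ha-290859-m02.
	fmt.Println(sshAlive("/path/to/id_rsa", "docker@192.168.39.111"))
}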
	I0414 14:29:53.095545 1213155 main.go:141] libmachine: (ha-290859-m02) KVM machine creation complete
	I0414 14:29:53.095910 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
	I0414 14:29:53.096462 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:53.096622 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:53.096806 1213155 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 14:29:53.096820 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetState
	I0414 14:29:53.098070 1213155 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 14:29:53.098085 1213155 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 14:29:53.098090 1213155 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 14:29:53.098095 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.100244 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.100649 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.100680 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.100852 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.101066 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.101236 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.101372 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.101519 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:53.101769 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:53.101782 1213155 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 14:29:53.206593 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:29:53.206617 1213155 main.go:141] libmachine: Detecting the provisioner...
	I0414 14:29:53.206628 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.209588 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.209969 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.209988 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.210187 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.210382 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.210544 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.210717 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.210971 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:53.211192 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:53.211205 1213155 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 14:29:53.315888 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 14:29:53.315980 1213155 main.go:141] libmachine: found compatible host: buildroot
	I0414 14:29:53.315990 1213155 main.go:141] libmachine: Provisioning with buildroot...
	I0414 14:29:53.316001 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:29:53.316277 1213155 buildroot.go:166] provisioning hostname "ha-290859-m02"
	I0414 14:29:53.316306 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:29:53.316451 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.319393 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.319803 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.319837 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.319946 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.320140 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.320321 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.320450 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.320602 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:53.320806 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:53.320818 1213155 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-290859-m02 && echo "ha-290859-m02" | sudo tee /etc/hostname
	I0414 14:29:53.442594 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-290859-m02
	
	I0414 14:29:53.442629 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.445561 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.445918 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.445944 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.446150 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.446351 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.446528 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.446678 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.446833 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:53.447038 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:53.447053 1213155 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-290859-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-290859-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-290859-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:29:53.559946 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
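[Editor's note] The hostname script above is idempotent: it rewrites /etc/hosts only when no line already ends in the node name, preferring to edit an existing 127.0.1.1 entry over appending a new one. A sketch of how such a script can be rendered per node (illustrative only; this is not the provisioner's actual template code):

package main

import "fmt"

// hostsFixup renders the /etc/hosts edit shown above for a given node name.
func hostsFixup(hostname string) string {
	return fmt.Sprintf(`
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
  fi
fi`, hostname)
}

func main() { fmt.Println(hostsFixup("ha-290859-m02")) }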
	I0414 14:29:53.559988 1213155 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1196368/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1196368/.minikube}
	I0414 14:29:53.560014 1213155 buildroot.go:174] setting up certificates
	I0414 14:29:53.560031 1213155 provision.go:84] configureAuth start
	I0414 14:29:53.560046 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:29:53.560377 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:29:53.562822 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.563207 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.563237 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.563574 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.566107 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.566478 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.566505 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.566628 1213155 provision.go:143] copyHostCerts
	I0414 14:29:53.566676 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:29:53.566716 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem, removing ...
	I0414 14:29:53.566730 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:29:53.566839 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem (1082 bytes)
	I0414 14:29:53.566954 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:29:53.566979 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem, removing ...
	I0414 14:29:53.566987 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:29:53.567026 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem (1123 bytes)
	I0414 14:29:53.567106 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:29:53.567130 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem, removing ...
	I0414 14:29:53.567137 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:29:53.567173 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem (1675 bytes)
	I0414 14:29:53.567293 1213155 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem org=jenkins.ha-290859-m02 san=[127.0.0.1 192.168.39.111 ha-290859-m02 localhost minikube]
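[Editor's note] provision.go mints a per-node server certificate whose SANs cover every name and address the machine may be reached by (loopback, the node IP 192.168.39.111, the hostname, "minikube"). Below is a stdlib-only sketch of CA-signed issuance with those SANs; unlike the real flow, which loads the existing ca.pem/ca-key.pem, it generates a throwaway CA.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA standing in for ca.pem/ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert whose SANs mirror the san=[...] list in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-290859-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.111")},
		DNSNames:     []string{"ha-290859-m02", "localhost", "minikube"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Println("server cert bytes:", len(der), "err:", err)
}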
	I0414 14:29:53.976110 1213155 provision.go:177] copyRemoteCerts
	I0414 14:29:53.976184 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:29:53.976219 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.978798 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.979170 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.979202 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.979355 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.979571 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.979771 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.979950 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:29:54.060926 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0414 14:29:54.061020 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0414 14:29:54.083723 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0414 14:29:54.083818 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 14:29:54.106702 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0414 14:29:54.106773 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 14:29:54.128136 1213155 provision.go:87] duration metric: took 568.088664ms to configureAuth
	I0414 14:29:54.128177 1213155 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:29:54.128372 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:54.128400 1213155 main.go:141] libmachine: Checking connection to Docker...
	I0414 14:29:54.128413 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetURL
	I0414 14:29:54.129571 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | using libvirt version 6000000
	I0414 14:29:54.131690 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.132071 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.132095 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.132296 1213155 main.go:141] libmachine: Docker is up and running!
	I0414 14:29:54.132311 1213155 main.go:141] libmachine: Reticulating splines...
	I0414 14:29:54.132318 1213155 client.go:171] duration metric: took 23.368636066s to LocalClient.Create
	I0414 14:29:54.132344 1213155 start.go:167] duration metric: took 23.368708618s to libmachine.API.Create "ha-290859"
	I0414 14:29:54.132356 1213155 start.go:293] postStartSetup for "ha-290859-m02" (driver="kvm2")
	I0414 14:29:54.132370 1213155 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:29:54.132394 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.132652 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:29:54.132681 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:54.134726 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.135119 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.135146 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.135312 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:54.135512 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.135648 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:54.135782 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:29:54.217134 1213155 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:29:54.221237 1213155 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:29:54.221265 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/addons for local assets ...
	I0414 14:29:54.221324 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/files for local assets ...
	I0414 14:29:54.221392 1213155 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> 12036392.pem in /etc/ssl/certs
	I0414 14:29:54.221401 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /etc/ssl/certs/12036392.pem
	I0414 14:29:54.221495 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:29:54.230111 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:29:54.253934 1213155 start.go:296] duration metric: took 121.560617ms for postStartSetup
	I0414 14:29:54.253995 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
	I0414 14:29:54.254683 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:29:54.257374 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.257778 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.257811 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.258118 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:29:54.258332 1213155 start.go:128] duration metric: took 23.513984018s to createHost
	I0414 14:29:54.258362 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:54.260873 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.261257 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.261285 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.261448 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:54.261638 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.261821 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.261984 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:54.262185 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:54.262369 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:54.262379 1213155 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:29:54.367727 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744640994.343893226
	
	I0414 14:29:54.367759 1213155 fix.go:216] guest clock: 1744640994.343893226
	I0414 14:29:54.367766 1213155 fix.go:229] Guest: 2025-04-14 14:29:54.343893226 +0000 UTC Remote: 2025-04-14 14:29:54.258346943 +0000 UTC m=+69.442509295 (delta=85.546283ms)
	I0414 14:29:54.367782 1213155 fix.go:200] guest clock delta is within tolerance: 85.546283ms
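[Editor's note] The clock check parses the guest's `date +%s.%N` output and compares it with the host clock captured when the command returned; here the 85.5ms delta is inside tolerance, so no clock resync is forced. A sketch of the parse-and-compare step, using the exact values from this run; the one-second tolerance is an assumption for illustration.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns `date +%s.%N` output such as "1744640994.343893226"
// into a time.Time (%N always prints exactly nine digits).
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	if len(parts) != 2 {
		return time.Time{}, fmt.Errorf("unexpected clock output %q", out)
	}
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	nsec, err := strconv.ParseInt(parts[1], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, _ := parseGuestClock("1744640994.343893226") // guest clock from the log
	host := time.Unix(0, 1744640994258346943)           // host clock from the log
	delta := guest.Sub(host)
	const tolerance = time.Second // assumed tolerance, for illustration only
	fmt.Printf("delta=%v within=%v\n", delta, delta.Abs() < tolerance)
}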
	I0414 14:29:54.367788 1213155 start.go:83] releasing machines lock for "ha-290859-m02", held for 23.623550564s
	I0414 14:29:54.367807 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.368115 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:29:54.370975 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.371432 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.371462 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.373758 1213155 out.go:177] * Found network options:
	I0414 14:29:54.375127 1213155 out.go:177]   - NO_PROXY=192.168.39.110
	W0414 14:29:54.376278 1213155 proxy.go:119] fail to check proxy env: Error ip not in block
	I0414 14:29:54.376312 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.376913 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.377127 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.377268 1213155 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:29:54.377316 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	W0414 14:29:54.377370 1213155 proxy.go:119] fail to check proxy env: Error ip not in block
	I0414 14:29:54.377457 1213155 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0414 14:29:54.377481 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:54.380102 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.380374 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.380406 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.380429 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.380578 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:54.380741 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.380859 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.380897 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.380909 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:54.381045 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:29:54.381125 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:54.381305 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.381467 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:54.381614 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	W0414 14:29:54.458225 1213155 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:29:54.458308 1213155 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:29:54.490449 1213155 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 14:29:54.490475 1213155 start.go:495] detecting cgroup driver to use...
	I0414 14:29:54.490555 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0414 14:29:54.524660 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0414 14:29:54.537871 1213155 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:29:54.537936 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:29:54.549801 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:29:54.562203 1213155 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:29:54.666348 1213155 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:29:54.786710 1213155 docker.go:233] disabling docker service ...
	I0414 14:29:54.786789 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:29:54.800092 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:29:54.812105 1213155 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:29:54.936777 1213155 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:29:55.059002 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 14:29:55.072980 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:29:55.089970 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0414 14:29:55.099362 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0414 14:29:55.108681 1213155 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0414 14:29:55.108766 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0414 14:29:55.118203 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:29:55.127402 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0414 14:29:55.136483 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:29:55.145554 1213155 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:29:55.154769 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0414 14:29:55.163700 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0414 14:29:55.172612 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
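[Editor's note] The run of sed commands above rewrites /etc/containerd/config.toml in place: SystemdCgroup is forced to false (the "cgroupfs" driver chosen earlier), the sandbox image is pinned to pause:3.10, and legacy runtime names are mapped to io.containerd.runc.v2. The same intent, expressed as a Go regexp sketch over an invented config fragment:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Stand-in for /etc/containerd/config.toml before the edits.
	conf := `[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true`

	// Equivalent of: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	conf = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`).
		ReplaceAllString(conf, "${1}SystemdCgroup = false")
	// Equivalent of: sed -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "...pause:3.10"|'
	conf = regexp.MustCompile(`(?m)^(\s*)sandbox_image = .*$`).
		ReplaceAllString(conf, `${1}sandbox_image = "registry.k8s.io/pause:3.10"`)
	fmt.Println(conf)
}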
	I0414 14:29:55.181597 1213155 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:29:55.189962 1213155 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:29:55.190019 1213155 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:29:55.202112 1213155 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 14:29:55.210883 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:29:55.319480 1213155 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0414 14:29:55.344914 1213155 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0414 14:29:55.345008 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:29:55.349081 1213155 retry.go:31] will retry after 1.00520308s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0414 14:29:56.354657 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:29:56.359600 1213155 start.go:563] Will wait 60s for crictl version
	I0414 14:29:56.359685 1213155 ssh_runner.go:195] Run: which crictl
	I0414 14:29:56.363336 1213155 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:29:56.403201 1213155 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0414 14:29:56.403312 1213155 ssh_runner.go:195] Run: containerd --version
	I0414 14:29:56.430179 1213155 ssh_runner.go:195] Run: containerd --version
	I0414 14:29:56.454598 1213155 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.23 ...
	I0414 14:29:56.455785 1213155 out.go:177]   - env NO_PROXY=192.168.39.110
	I0414 14:29:56.456735 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:29:56.459280 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:56.459661 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:56.459691 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:56.459901 1213155 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 14:29:56.463673 1213155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:29:56.475057 1213155 mustload.go:65] Loading cluster: ha-290859
	I0414 14:29:56.475248 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:56.475557 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:56.475600 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:56.490597 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45247
	I0414 14:29:56.491136 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:56.491690 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:56.491711 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:56.492119 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:56.492309 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:56.493794 1213155 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:29:56.494134 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:56.494173 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:56.509360 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38381
	I0414 14:29:56.509774 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:56.510229 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:56.510256 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:56.510618 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:56.510840 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:56.511031 1213155 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859 for IP: 192.168.39.111
	I0414 14:29:56.511044 1213155 certs.go:194] generating shared ca certs ...
	I0414 14:29:56.511057 1213155 certs.go:226] acquiring lock for ca certs: {Name:mk7215406b4c41badf9eca6bf9f1036fd88f670e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:56.511177 1213155 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key
	I0414 14:29:56.511226 1213155 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key
	I0414 14:29:56.511236 1213155 certs.go:256] generating profile certs ...
	I0414 14:29:56.511347 1213155 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key
	I0414 14:29:56.511373 1213155 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e
	I0414 14:29:56.511386 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110 192.168.39.111 192.168.39.254]
	I0414 14:29:56.589532 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e ...
	I0414 14:29:56.589564 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e: {Name:mk9fb7b2adad4a62e9ebf1f50826b8647aaaa2d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:56.589727 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e ...
	I0414 14:29:56.589740 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e: {Name:mk7ad07038879568d4a23c2fb5c04f12405eb02f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:56.589811 1213155 certs.go:381] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt
	I0414 14:29:56.589948 1213155 certs.go:385] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key
	I0414 14:29:56.590096 1213155 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key
	I0414 14:29:56.590118 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0414 14:29:56.590137 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0414 14:29:56.590151 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0414 14:29:56.590162 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0414 14:29:56.590180 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0414 14:29:56.590198 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0414 14:29:56.590211 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0414 14:29:56.590220 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0414 14:29:56.590271 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem (1338 bytes)
	W0414 14:29:56.590298 1213155 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639_empty.pem, impossibly tiny 0 bytes
	I0414 14:29:56.590308 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 14:29:56.590327 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem (1082 bytes)
	I0414 14:29:56.590346 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:29:56.590368 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem (1675 bytes)
	I0414 14:29:56.590404 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:29:56.590430 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:56.590446 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem -> /usr/share/ca-certificates/1203639.pem
	I0414 14:29:56.590457 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /usr/share/ca-certificates/12036392.pem
	I0414 14:29:56.590494 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:56.593379 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:56.593755 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:56.593777 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:56.593996 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:56.594232 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:56.594405 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:56.594540 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:56.671687 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0414 14:29:56.677338 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0414 14:29:56.689003 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0414 14:29:56.693487 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0414 14:29:56.704430 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0414 14:29:56.708650 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0414 14:29:56.719039 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0414 14:29:56.723166 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0414 14:29:56.734152 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0414 14:29:56.738243 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0414 14:29:56.749081 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0414 14:29:56.753248 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0414 14:29:56.764073 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:29:56.788198 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:29:56.813073 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:29:56.835958 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:29:56.859645 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0414 14:29:56.882879 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 14:29:56.906187 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:29:56.928932 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 14:29:56.952365 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:29:56.974920 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem --> /usr/share/ca-certificates/1203639.pem (1338 bytes)
	I0414 14:29:56.998466 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /usr/share/ca-certificates/12036392.pem (1708 bytes)
	I0414 14:29:57.022704 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0414 14:29:57.038828 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0414 14:29:57.054237 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0414 14:29:57.069513 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0414 14:29:57.085532 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0414 14:29:57.101522 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0414 14:29:57.117372 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
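[Editor's note] The "scp memory --> ..." lines stream in-memory byte slices (the service-account keys, front-proxy and etcd CAs, the generated kubeconfig) straight to remote paths rather than staging temp files. One way to sketch that with the system ssh binary is to pipe the payload into `sudo tee`; the address and key path below are placeholders from this run.

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// pushBytes streams an in-memory payload to a remote path by piping it
// into `sudo tee` over ssh, so nothing touches the local disk.
func pushBytes(addr, keyPath, remotePath string, data []byte) error {
	cmd := exec.Command("ssh",
		"-i", keyPath,
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		addr,
		fmt.Sprintf("sudo tee %s >/dev/null", remotePath))
	cmd.Stdin = bytes.NewReader(data)
	return cmd.Run()
}

func main() {
	err := pushBytes("docker@192.168.39.111", "/path/to/id_rsa",
		"/var/lib/minikube/kubeconfig", []byte("apiVersion: v1\nkind: Config\n"))
	fmt.Println("push:", err)
}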
	I0414 14:29:57.132827 1213155 ssh_runner.go:195] Run: openssl version
	I0414 14:29:57.138331 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:29:57.148324 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:57.152469 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:57.152557 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:57.158279 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 14:29:57.169126 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1203639.pem && ln -fs /usr/share/ca-certificates/1203639.pem /etc/ssl/certs/1203639.pem"
	I0414 14:29:57.179995 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1203639.pem
	I0414 14:29:57.184265 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1203639.pem
	I0414 14:29:57.184340 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1203639.pem
	I0414 14:29:57.189810 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1203639.pem /etc/ssl/certs/51391683.0"
	I0414 14:29:57.199987 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12036392.pem && ln -fs /usr/share/ca-certificates/12036392.pem /etc/ssl/certs/12036392.pem"
	I0414 14:29:57.210177 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12036392.pem
	I0414 14:29:57.214740 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/12036392.pem
	I0414 14:29:57.214815 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12036392.pem
	I0414 14:29:57.221853 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12036392.pem /etc/ssl/certs/3ec20f2e.0"
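[Editor's note] Each CA bundle installed under /usr/share/ca-certificates is also linked as /etc/ssl/certs/<subject-hash>.0 (b5213941.0, 51391683.0, 3ec20f2e.0 above) so OpenSSL's hash-based directory lookup can find it. A sketch of deriving that link name the same way the log does, by shelling out to openssl:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHash returns the OpenSSL subject hash for a PEM certificate,
// the value used as the /etc/ssl/certs/<hash>.0 symlink name.
func subjectHash(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	fmt.Printf("ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", h)
}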
	I0414 14:29:57.232248 1213155 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:29:57.236270 1213155 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 14:29:57.236327 1213155 kubeadm.go:934] updating node {m02 192.168.39.111 8443 v1.32.2 containerd true true} ...
	I0414 14:29:57.236439 1213155 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-290859-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 14:29:57.236473 1213155 kube-vip.go:115] generating kube-vip config ...
	I0414 14:29:57.236525 1213155 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0414 14:29:57.252239 1213155 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0414 14:29:57.252336 1213155 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.10
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
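The manifest above is kube-vip's static pod: it ARP-advertises the VIP 192.168.39.254 on eth0 and elects a leader through the plndr-cp-lock lease, with control-plane load-balancing on port 8443 enabled. A hedged spot-check from the elected control-plane node (address, interface, and lease name are read off the config above; the commands themselves are generic ip/kubectl, not part of the test):

	ip addr show eth0 | grep 192.168.39.254
	kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}'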
	I0414 14:29:57.252412 1213155 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 14:29:57.262218 1213155 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.2': No such file or directory
	
	Initiating transfer...
	I0414 14:29:57.262295 1213155 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.2
	I0414 14:29:57.271580 1213155 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
	I0414 14:29:57.271599 1213155 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubeadm
	I0414 14:29:57.271617 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubectl -> /var/lib/minikube/binaries/v1.32.2/kubectl
	I0414 14:29:57.271622 1213155 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubelet
	I0414 14:29:57.271681 1213155 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl
	I0414 14:29:57.275804 1213155 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubectl': No such file or directory
	I0414 14:29:57.275835 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubectl --> /var/lib/minikube/binaries/v1.32.2/kubectl (57323672 bytes)
	I0414 14:29:58.408400 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:29:58.423781 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubelet -> /var/lib/minikube/binaries/v1.32.2/kubelet
	I0414 14:29:58.423898 1213155 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet
	I0414 14:29:58.428378 1213155 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubelet': No such file or directory
	I0414 14:29:58.428415 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubelet --> /var/lib/minikube/binaries/v1.32.2/kubelet (77406468 bytes)
	I0414 14:29:58.749359 1213155 out.go:201] 
	W0414 14:29:58.750775 1213155 out.go:270] X Exiting due to GUEST_START: failed to start node: adding node: update node: downloading binaries: downloading kubeadm: download failed: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256 Dst:/home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubeadm.download Pwd: Mode:2 Umask:---------- Detectors:[0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0] Decompressors:map[bz2:0xc0004c8690 gz:0xc0004c8698 tar:0xc0004c8610 tar.bz2:0xc0004c8620 tar.gz:0xc0004c8630 tar.xz:0xc0004c8650 tar.zst:0xc0004c8660 tbz2:0xc0004c8620 tgz:0xc0004c8630 txz:0xc0004c8650 tzst:0xc0004c8660 xz:0xc0004c8700 zip:0xc0004c8720 zst:0xc0004c8708] Getters:map[file:0xc00216a250 http:0xc00012c550 https:0xc00012c5a0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: read tcp 10.154.0.3:60586->151.101.193.55:443: read: connection reset by peer
	W0414 14:29:58.750801 1213155 out.go:270] * 
	W0414 14:29:58.751639 1213155 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 14:29:58.753070 1213155 out.go:201] 
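The fatal error above is a TCP reset on the connection to 151.101.193.55 while fetching kubeadm, so the m02 node never received its binaries. A minimal sketch of re-running the same download with checksum verification outside minikube (URL and version are copied from the log; the retry flags and the two-space sha256sum format follow the standard Kubernetes install recipe):

	curl -fL --retry 3 -o kubeadm https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm
	curl -fL -o kubeadm.sha256 https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256
	echo "$(cat kubeadm.sha256)  kubeadm" | sha256sum --check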
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	24e6d7cfe7ea4       8c811b4aec35f       11 minutes ago      Running             busybox                   0                   78438e8022143       busybox-58667487b6-t6bgg
	731a9f2fe8645       c69fa2e9cbf5f       12 minutes ago      Running             coredns                   0                   e56d2e4c87eea       coredns-668d6bf9bc-qnl6q
	0ec0a3a234c7c       c69fa2e9cbf5f       12 minutes ago      Running             coredns                   0                   2818c413e6e32       coredns-668d6bf9bc-wbn4p
	922f97d06563e       6e38f40d628db       12 minutes ago      Running             storage-provisioner       0                   4de376d34ee7f       storage-provisioner
	2df8ccb8d6ed9       df3849d954c98       12 minutes ago      Running             kindnet-cni               0                   08244cfc780bd       kindnet-hm99t
	e22a81661302f       f1332858868e1       12 minutes ago      Running             kube-proxy                0                   f20a0bcfbd507       kube-proxy-cg945
	9914f8879fc43       6ff023a402a69       12 minutes ago      Running             kube-vip                  0                   7b4e857fc4a72       kube-vip-ha-290859
	8263b35014337       b6a454c5a800d       12 minutes ago      Running             kube-controller-manager   0                   96ffccfabb2f0       kube-controller-manager-ha-290859
	3607093f95b04       85b7a174738ba       12 minutes ago      Running             kube-apiserver            0                   7d06c53c8318a       kube-apiserver-ha-290859
	b9d0c94204534       a9e7e6b294baf       12 minutes ago      Running             etcd                      0                   07c98c2ded11c       etcd-ha-290859
	341626ffff967       d8e673e7c9983       12 minutes ago      Running             kube-scheduler            0                   d86edf81d4f34       kube-scheduler-ha-290859
	
	
	==> containerd <==
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.168944603Z" level=info msg="StartContainer for \"0ec0a3a234c7c9ab89ca83a237362a229e9c5f0e94fdbf641b886cf994e1cd2f\""
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.181036869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qnl6q,Uid:a590080d-c4b1-4697-9849-ae6130e483a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"e56d2e4c87eea2d527e5c301e33c596e4ec4533b17e49248e3c35eeb66f90f11\""
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.186359489Z" level=info msg="CreateContainer within sandbox \"e56d2e4c87eea2d527e5c301e33c596e4ec4533b17e49248e3c35eeb66f90f11\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.209760426Z" level=info msg="CreateContainer within sandbox \"e56d2e4c87eea2d527e5c301e33c596e4ec4533b17e49248e3c35eeb66f90f11\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0\""
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.212826022Z" level=info msg="StartContainer for \"922f97d06563e10c12ce83edd45e4f1aa0b78449dcdb50b413a7f4fc80cc346b\" returns successfully"
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.215681811Z" level=info msg="StartContainer for \"731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0\""
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.285830032Z" level=info msg="StartContainer for \"0ec0a3a234c7c9ab89ca83a237362a229e9c5f0e94fdbf641b886cf994e1cd2f\" returns successfully"
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.294639585Z" level=info msg="StartContainer for \"731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0\" returns successfully"
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.131928214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-58667487b6-t6bgg,Uid:bd39f57c-bcb5-4d77-b171-6d4d2f237e54,Namespace:default,Attempt:0,}"
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.218617705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.218691310Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.218706805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.218958691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.281907696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-58667487b6-t6bgg,Uid:bd39f57c-bcb5-4d77-b171-6d4d2f237e54,Namespace:default,Attempt:0,} returns sandbox id \"78438e8022143055bed5e2d8a26db130ead88208a68bd14ca25618be3edf24e2\""
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.284050999Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\""
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.401970091Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.404464641Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28: active requests=0, bytes read=727667"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.406415797Z" level=info msg="ImageCreate event name:\"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.409920833Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.411266903Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28\" with image id \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\", repo tag \"gcr.io/k8s-minikube/busybox:1.28\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\", size \"725911\" in 2.127171694s"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.411378057Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\" returns image reference \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\""
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.414728181Z" level=info msg="CreateContainer within sandbox \"78438e8022143055bed5e2d8a26db130ead88208a68bd14ca25618be3edf24e2\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.437197602Z" level=info msg="CreateContainer within sandbox \"78438e8022143055bed5e2d8a26db130ead88208a68bd14ca25618be3edf24e2\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"24e6d7cfe7ea4490a4e08a40f32b9cf717c4d83060631102c580d6adf2fc47f5\""
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.439640223Z" level=info msg="StartContainer for \"24e6d7cfe7ea4490a4e08a40f32b9cf717c4d83060631102c580d6adf2fc47f5\""
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.489937462Z" level=info msg="StartContainer for \"24e6d7cfe7ea4490a4e08a40f32b9cf717c4d83060631102c580d6adf2fc47f5\" returns successfully"
	
	
	==> coredns [0ec0a3a234c7c9ab89ca83a237362a229e9c5f0e94fdbf641b886cf994e1cd2f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:46089 - 56153 "HINFO IN 6072608555509463616.6529762715821029691. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009374887s
	[INFO] 10.244.0.4:35907 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000221161s
	[INFO] 10.244.0.4:36782 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005796917s
	[INFO] 10.244.0.4:41522 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000189547s
	[INFO] 10.244.0.4:42146 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118814s
	[INFO] 10.244.0.4:60607 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123758s
	[INFO] 10.244.0.4:43711 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000363945s
	[INFO] 10.244.0.4:55165 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147511s
	[INFO] 10.244.0.4:37988 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063814s
	
	
	==> coredns [731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:50026 - 40228 "HINFO IN 6089878548460793106.7503956428927620962. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010088983s
	[INFO] 10.244.0.4:56129 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00054069s
	[INFO] 10.244.0.4:53926 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.015577927s
	[INFO] 10.244.0.4:39454 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 1.017801671s
	[INFO] 10.244.0.4:52928 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.006480432s
	[INFO] 10.244.0.4:37155 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000144828s
	[INFO] 10.244.0.4:60063 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003567762s
	[INFO] 10.244.0.4:60207 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153406s
	[INFO] 10.244.0.4:60174 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117303s
	
	
	==> describe nodes <==
	Name:               ha-290859
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-290859
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ed8f1f01b35eff2786f40199152a1775806f2de2
	                    minikube.k8s.io/name=ha-290859
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_14T14_29_26_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Apr 2025 14:29:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-290859
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Apr 2025 14:41:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Apr 2025 14:37:12 +0000   Mon, 14 Apr 2025 14:29:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Apr 2025 14:37:12 +0000   Mon, 14 Apr 2025 14:29:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Apr 2025 14:37:12 +0000   Mon, 14 Apr 2025 14:29:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Apr 2025 14:37:12 +0000   Mon, 14 Apr 2025 14:29:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.110
	  Hostname:    ha-290859
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0538f5775f954b3bbf6bc94e8eb6c49a
	  System UUID:                0538f577-5f95-4b3b-bf6b-c94e8eb6c49a
	  Boot ID:                    357ae105-a7f9-47b1-bf31-1c1aadedfe92
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-t6bgg             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-668d6bf9bc-qnl6q             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 coredns-668d6bf9bc-wbn4p             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 etcd-ha-290859                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-hm99t                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-290859             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-290859    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-cg945                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-290859             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-290859                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 12m   kube-proxy       
	  Normal  Starting                 12m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m   kubelet          Node ha-290859 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m   kubelet          Node ha-290859 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m   kubelet          Node ha-290859 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m   node-controller  Node ha-290859 event: Registered Node ha-290859 in Controller
	  Normal  NodeReady                12m   kubelet          Node ha-290859 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051284] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038065] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.815736] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.968563] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.543371] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr14 14:29] systemd-fstab-generator[505]: Ignoring "noauto" option for root device
	[  +0.058894] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059786] systemd-fstab-generator[518]: Ignoring "noauto" option for root device
	[  +0.183634] systemd-fstab-generator[532]: Ignoring "noauto" option for root device
	[  +0.109211] systemd-fstab-generator[544]: Ignoring "noauto" option for root device
	[  +0.261328] systemd-fstab-generator[574]: Ignoring "noauto" option for root device
	[  +4.868852] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.061817] kauditd_printk_skb: 158 callbacks suppressed
	[  +0.541337] systemd-fstab-generator[688]: Ignoring "noauto" option for root device
	[  +4.433977] systemd-fstab-generator[826]: Ignoring "noauto" option for root device
	[  +0.054755] kauditd_printk_skb: 46 callbacks suppressed
	[  +7.040196] systemd-fstab-generator[1293]: Ignoring "noauto" option for root device
	[  +0.092655] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.133260] kauditd_printk_skb: 36 callbacks suppressed
	[ +14.332004] kauditd_printk_skb: 23 callbacks suppressed
	[Apr14 14:30] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [b9d0c942045346e617420beacf1ee53ebaa73b72295bfad233845fe524f8b15c] <==
	{"level":"info","ts":"2025-04-14T14:29:20.934880Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 became leader at term 2"}
	{"level":"info","ts":"2025-04-14T14:29:20.934897Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fbb007bab925a598 elected leader fbb007bab925a598 at term 2"}
	{"level":"info","ts":"2025-04-14T14:29:20.938840Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"fbb007bab925a598","local-member-attributes":"{Name:ha-290859 ClientURLs:[https://192.168.39.110:2379]}","request-path":"/0/members/fbb007bab925a598/attributes","cluster-id":"a3dbfa6decfc8853","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-14T14:29:20.938875Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T14:29:20.939017Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:29:20.939433Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T14:29:20.940639Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a3dbfa6decfc8853","local-member-id":"fbb007bab925a598","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:29:20.940850Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:29:20.940910Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:29:20.941291Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T14:29:20.941327Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T14:29:20.942134Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-14T14:29:20.942264Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.110:2379"}
	{"level":"info","ts":"2025-04-14T14:29:20.943625Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-14T14:29:20.943655Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2025-04-14T14:29:27.104552Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.197172ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/node-controller\" limit:1 ","response":"range_response_count:1 size:195"}
	{"level":"info","ts":"2025-04-14T14:29:27.104712Z","caller":"traceutil/trace.go:171","msg":"trace[2014118741] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/node-controller; range_end:; response_count:1; response_revision:283; }","duration":"161.489617ms","start":"2025-04-14T14:29:26.943197Z","end":"2025-04-14T14:29:27.104687Z","steps":["trace[2014118741] 'range keys from in-memory index tree'  (duration: 161.141805ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:29:27.105569Z","caller":"traceutil/trace.go:171","msg":"trace[1003808847] transaction","detail":"{read_only:false; response_revision:284; number_of_response:1; }","duration":"157.128151ms","start":"2025-04-14T14:29:26.948431Z","end":"2025-04-14T14:29:27.105559Z","steps":["trace[1003808847] 'process raft request'  (duration: 84.378612ms)","trace[1003808847] 'compare'  (duration: 71.52798ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-14T14:29:27.104865Z","caller":"traceutil/trace.go:171","msg":"trace[43329066] linearizableReadLoop","detail":"{readStateIndex:297; appliedIndex:296; }","duration":"119.436827ms","start":"2025-04-14T14:29:26.985404Z","end":"2025-04-14T14:29:27.104841Z","steps":["trace[43329066] 'read index received'  (duration: 47.335931ms)","trace[43329066] 'applied index is now lower than readState.Index'  (duration: 72.100547ms)"],"step_count":2}
	{"level":"warn","ts":"2025-04-14T14:29:27.105882Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.482108ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-290859\" limit:1 ","response":"range_response_count:1 size:4024"}
	{"level":"info","ts":"2025-04-14T14:29:27.105907Z","caller":"traceutil/trace.go:171","msg":"trace[1848025885] range","detail":"{range_begin:/registry/minions/ha-290859; range_end:; response_count:1; response_revision:284; }","duration":"120.538719ms","start":"2025-04-14T14:29:26.985360Z","end":"2025-04-14T14:29:27.105899Z","steps":["trace[1848025885] 'agreement among raft nodes before linearized reading'  (duration: 120.384333ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:30:04.979205Z","caller":"traceutil/trace.go:171","msg":"trace[85484590] transaction","detail":"{read_only:false; response_revision:496; number_of_response:1; }","duration":"156.247744ms","start":"2025-04-14T14:30:04.822935Z","end":"2025-04-14T14:30:04.979183Z","steps":["trace[85484590] 'process raft request'  (duration: 156.102613ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:39:20.967676Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":955}
	{"level":"info","ts":"2025-04-14T14:39:20.980951Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":955,"took":"12.971168ms","hash":3281203929,"current-db-size-bytes":2400256,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2400256,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2025-04-14T14:39:20.980998Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3281203929,"revision":955,"compact-revision":-1}
	
	
	==> kernel <==
	 14:41:57 up 13 min,  0 users,  load average: 0.26, 0.20, 0.11
	Linux ha-290859 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2df8ccb8d6ed928a95e69ecd1be2105fc737c699aa26805820a0af0eca5bb50d] <==
	I0414 14:39:54.507048       1 main.go:301] handling current node
	I0414 14:40:04.508951       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:40:04.508995       1 main.go:301] handling current node
	I0414 14:40:14.500379       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:40:14.500645       1 main.go:301] handling current node
	I0414 14:40:24.506288       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:40:24.506555       1 main.go:301] handling current node
	I0414 14:40:34.500952       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:40:34.501060       1 main.go:301] handling current node
	I0414 14:40:44.501586       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:40:44.501707       1 main.go:301] handling current node
	I0414 14:40:54.508592       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:40:54.508634       1 main.go:301] handling current node
	I0414 14:41:04.502440       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:41:04.502662       1 main.go:301] handling current node
	I0414 14:41:14.504432       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:41:14.504715       1 main.go:301] handling current node
	I0414 14:41:24.505571       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:41:24.505635       1 main.go:301] handling current node
	I0414 14:41:34.500339       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:41:34.500416       1 main.go:301] handling current node
	I0414 14:41:44.500407       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:41:44.500557       1 main.go:301] handling current node
	I0414 14:41:54.509039       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:41:54.509064       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3607093f95b0430c4841d7be9ed19d0163ff2e9ee2889a44f89bd1ca07bf42d3] <==
	I0414 14:29:22.361941       1 shared_informer.go:320] Caches are synced for configmaps
	I0414 14:29:22.362262       1 aggregator.go:171] initial CRD sync complete...
	I0414 14:29:22.362271       1 autoregister_controller.go:144] Starting autoregister controller
	I0414 14:29:22.362276       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0414 14:29:22.362280       1 cache.go:39] Caches are synced for autoregister controller
	I0414 14:29:22.378719       1 controller.go:615] quota admission added evaluator for: namespaces
	I0414 14:29:22.457815       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0414 14:29:23.164003       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0414 14:29:23.168635       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0414 14:29:23.168816       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0414 14:29:23.763560       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0414 14:29:23.812117       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0414 14:29:23.884276       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0414 14:29:23.896601       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.110]
	I0414 14:29:23.897534       1 controller.go:615] quota admission added evaluator for: endpoints
	I0414 14:29:23.902387       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0414 14:29:24.193931       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0414 14:29:25.780107       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0414 14:29:25.808820       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0414 14:29:25.816856       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0414 14:29:29.653221       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0414 14:29:29.756960       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0414 14:41:55.019097       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52466: use of closed network connection
	E0414 14:41:55.440782       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52532: use of closed network connection
	E0414 14:41:55.859929       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52600: use of closed network connection
	
	
	==> kube-controller-manager [8263b35014337f6119ba3a0d6487090fd5b1b3b8a002a99623620e847d186847] <==
	I0414 14:29:28.849617       1 shared_informer.go:320] Caches are synced for resource quota
	I0414 14:29:28.850996       1 shared_informer.go:320] Caches are synced for stateful set
	I0414 14:29:29.000358       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
	I0414 14:29:29.886420       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="120.420823ms"
	I0414 14:29:29.906585       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="20.109075ms"
	I0414 14:29:29.906712       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="88.01µs"
	I0414 14:29:44.519476       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
	I0414 14:29:44.534945       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
	I0414 14:29:44.547691       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="1.626341ms"
	I0414 14:29:44.559315       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="67.802µs"
	I0414 14:29:44.571127       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="74.78µs"
	I0414 14:29:44.594711       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="70.198µs"
	I0414 14:29:45.825051       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="19.769469ms"
	I0414 14:29:45.826885       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="164.591µs"
	I0414 14:29:45.846118       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="13.808387ms"
	I0414 14:29:45.849026       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="50.566µs"
	I0414 14:29:48.846765       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0414 14:29:56.189929       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
	I0414 14:30:00.864893       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="63.092508ms"
	I0414 14:30:00.876770       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="11.795122ms"
	I0414 14:30:00.876844       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="37.849µs"
	I0414 14:30:03.843786       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="5.465875ms"
	I0414 14:30:03.844627       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="57.422µs"
	I0414 14:30:26.371478       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
	I0414 14:37:12.908997       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
	
	
	==> kube-proxy [e22a81661302ff340c9846a7a06a13d955ab98cfe8e7088e0c805fb4f3eee8a2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0414 14:29:30.555771       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0414 14:29:30.580550       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.110"]
	E0414 14:29:30.580640       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0414 14:29:30.617235       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0414 14:29:30.617293       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0414 14:29:30.617328       1 server_linux.go:170] "Using iptables Proxier"
	I0414 14:29:30.620046       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0414 14:29:30.620989       1 server.go:497] "Version info" version="v1.32.2"
	I0414 14:29:30.621018       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 14:29:30.625365       1 config.go:329] "Starting node config controller"
	I0414 14:29:30.625863       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0414 14:29:30.628597       1 config.go:199] "Starting service config controller"
	I0414 14:29:30.628644       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0414 14:29:30.628665       1 config.go:105] "Starting endpoint slice config controller"
	I0414 14:29:30.628683       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0414 14:29:30.726314       1 shared_informer.go:320] Caches are synced for node config
	I0414 14:29:30.729639       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0414 14:29:30.729680       1 shared_informer.go:320] Caches are synced for service config
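The nftables errors at the top of this log come from kube-proxy's startup cleanup probe; the guest kernel apparently lacks nftables support, so the proxier falls back to iptables mode, as the "Using iptables Proxier" line confirms. One hedged way to confirm the iptables backend actually programmed service rules on the node (a generic check, not part of the test):

	sudo iptables -t nat -L KUBE-SERVICES -n | head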
	
	
	==> kube-scheduler [341626ffff967b14e3bfaa050905eba2b82a07223c0356ee50b5deeef6d9898b] <==
	E0414 14:29:22.288686       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:22.287191       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:22.288704       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:22.286394       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0414 14:29:22.288719       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0414 14:29:22.285771       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.108289       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0414 14:29:23.108351       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.153824       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:23.153954       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.203744       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0414 14:29:23.203977       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0414 14:29:23.367236       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0414 14:29:23.367550       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.396026       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0414 14:29:23.396243       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.401643       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:23.401820       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.425454       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0414 14:29:23.425684       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.433181       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:23.433222       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.457688       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0414 14:29:23.457949       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0414 14:29:25.662221       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 14 14:37:25 ha-290859 kubelet[1300]: E0414 14:37:25.693525    1300 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:37:25 ha-290859 kubelet[1300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:37:25 ha-290859 kubelet[1300]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:37:25 ha-290859 kubelet[1300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:37:25 ha-290859 kubelet[1300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 14:38:25 ha-290859 kubelet[1300]: E0414 14:38:25.691874    1300 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:38:25 ha-290859 kubelet[1300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:38:25 ha-290859 kubelet[1300]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:38:25 ha-290859 kubelet[1300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:38:25 ha-290859 kubelet[1300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 14:39:25 ha-290859 kubelet[1300]: E0414 14:39:25.692811    1300 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:39:25 ha-290859 kubelet[1300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:39:25 ha-290859 kubelet[1300]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:39:25 ha-290859 kubelet[1300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:39:25 ha-290859 kubelet[1300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 14:40:25 ha-290859 kubelet[1300]: E0414 14:40:25.693003    1300 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:40:25 ha-290859 kubelet[1300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:40:25 ha-290859 kubelet[1300]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:40:25 ha-290859 kubelet[1300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:40:25 ha-290859 kubelet[1300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 14:41:25 ha-290859 kubelet[1300]: E0414 14:41:25.692589    1300 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:41:25 ha-290859 kubelet[1300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:41:25 ha-290859 kubelet[1300]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:41:25 ha-290859 kubelet[1300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:41:25 ha-290859 kubelet[1300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
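Every entry in this log is the same periodic iptables canary failure: kubelet tries to create a KUBE-KUBELET-CANARY chain in ip6tables' nat table, which this IPv4-only guest kernel does not provide. A quick hedged check for the missing module (generic Linux, not minikube-specific):

	lsmod | grep ip6table_nat || echo "ip6table_nat not loaded"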
	
	
	==> storage-provisioner [922f97d06563e10c12ce83edd45e4f1aa0b78449dcdb50b413a7f4fc80cc346b] <==
	I0414 14:29:45.362622       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0414 14:29:45.429344       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0414 14:29:45.429932       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0414 14:29:45.442302       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0414 14:29:45.443637       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1cd1340a-7958-40a2-8c68-004b8c8385a8", APIVersion:"v1", ResourceVersion:"420", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-290859_00c8818d-bfd0-4e70-bffb-1f8673302f0b became leader
	I0414 14:29:45.444610       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-290859_00c8818d-bfd0-4e70-bffb-1f8673302f0b!
	I0414 14:29:45.546579       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-290859_00c8818d-bfd0-4e70-bffb-1f8673302f0b!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-290859 -n ha-290859
helpers_test.go:261: (dbg) Run:  kubectl --context ha-290859 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-58667487b6-8bg2x busybox-58667487b6-q9jvx
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DeployApp]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-290859 describe pod busybox-58667487b6-8bg2x busybox-58667487b6-q9jvx
helpers_test.go:282: (dbg) kubectl --context ha-290859 describe pod busybox-58667487b6-8bg2x busybox-58667487b6-q9jvx:

                                                
                                                
-- stdout --
	Name:             busybox-58667487b6-8bg2x
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=58667487b6
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-58667487b6
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bh9gx (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-bh9gx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  91s (x3 over 11m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	
	
	Name:             busybox-58667487b6-q9jvx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=58667487b6
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-58667487b6
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fklg7 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-fklg7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  91s (x3 over 11m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeployApp (717.15s)
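Both pending busybox pods above fail with "0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules". That is consistent with the busybox deployment carrying a required pod anti-affinity term on the hostname topology: the StartCluster failure left ha-290859 as the only schedulable node, so replicas two and three have nowhere legal to land and no running pod is a preemption victim. A minimal sketch of such a term using the upstream k8s.io/api types — the label selector and topology key are illustrative assumptions, not the test's actual manifest:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildAntiAffinity returns a required anti-affinity term that forbids two
// pods labelled app=busybox from sharing a node; with three replicas and a
// single node, the scheduler emits exactly the FailedScheduling event above.
func buildAntiAffinity() *corev1.Affinity {
	return &corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"app": "busybox"},
				},
				TopologyKey: "kubernetes.io/hostname",
			}},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", buildAntiAffinity())
}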

TestMultiControlPlane/serial/PingHostFromPods (2.6s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-290859 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-290859 -- exec busybox-58667487b6-8bg2x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:207: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p ha-290859 -- exec busybox-58667487b6-8bg2x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": exit status 1 (124.311001ms)

** stderr ** 
	Error from server (BadRequest): pod busybox-58667487b6-8bg2x does not have a host assigned

** /stderr **
ha_test.go:209: Pod busybox-58667487b6-8bg2x could not resolve 'host.minikube.internal': exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-290859 -- exec busybox-58667487b6-q9jvx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:207: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p ha-290859 -- exec busybox-58667487b6-q9jvx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": exit status 1 (121.423175ms)

** stderr ** 
	Error from server (BadRequest): pod busybox-58667487b6-q9jvx does not have a host assigned

** /stderr **
ha_test.go:209: Pod busybox-58667487b6-q9jvx could not resolve 'host.minikube.internal': exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-290859 -- exec busybox-58667487b6-t6bgg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-290859 -- exec busybox-58667487b6-t6bgg -- sh -c "ping -c 1 192.168.39.1"
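The resolve step pipes busybox's nslookup through awk 'NR==5' and cut -d' ' -f3, i.e. it takes the fifth output line (where the resolved address appears in busybox's nslookup format) and keeps the third space-separated field. The two BadRequest errors above are therefore not DNS failures: kubectl exec refuses to run in a pod that was never scheduled. A hedged Go sketch of the same probe, simplified to plain kubectl with --context (the helper name is invented for illustration):

package main

import (
	"fmt"
	"os/exec"
)

// resolveFromPod runs the test's nslookup pipeline inside a pod and returns
// whatever the pipeline prints. For an unscheduled pod, kubectl exec fails
// with "does not have a host assigned", exactly as in the log above.
func resolveFromPod(context, pod, host string) (string, error) {
	script := fmt.Sprintf("nslookup %s | awk 'NR==5' | cut -d' ' -f3", host)
	out, err := exec.Command("kubectl", "--context", context,
		"exec", pod, "--", "sh", "-c", script).CombinedOutput()
	return string(out), err
}

func main() {
	ip, err := resolveFromPod("ha-290859", "busybox-58667487b6-t6bgg", "host.minikube.internal")
	fmt.Println(ip, err)
}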
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-290859 -n ha-290859
helpers_test.go:244: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-290859 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-290859 logs -n 25: (1.187211959s)
helpers_test.go:252: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x -- nslookup |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx -- nslookup |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg -- nslookup |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x             |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx             |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg             |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg -- sh       |           |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1            |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 14:28:44
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 14:28:44.853283 1213155 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:28:44.853383 1213155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:28:44.853391 1213155 out.go:358] Setting ErrFile to fd 2...
	I0414 14:28:44.853395 1213155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:28:44.853589 1213155 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
	I0414 14:28:44.854173 1213155 out.go:352] Setting JSON to false
	I0414 14:28:44.855127 1213155 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":22268,"bootTime":1744618657,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 14:28:44.855241 1213155 start.go:139] virtualization: kvm guest
	I0414 14:28:44.857434 1213155 out.go:177] * [ha-290859] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 14:28:44.858763 1213155 out.go:177]   - MINIKUBE_LOCATION=20512
	I0414 14:28:44.858802 1213155 notify.go:220] Checking for updates...
	I0414 14:28:44.861113 1213155 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 14:28:44.862568 1213155 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:28:44.864291 1213155 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:28:44.865558 1213155 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 14:28:44.866690 1213155 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 14:28:44.867994 1213155 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 14:28:44.903880 1213155 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 14:28:44.904972 1213155 start.go:297] selected driver: kvm2
	I0414 14:28:44.904990 1213155 start.go:901] validating driver "kvm2" against <nil>
	I0414 14:28:44.905002 1213155 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 14:28:44.905693 1213155 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:28:44.905760 1213155 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20512-1196368/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 14:28:44.921165 1213155 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 14:28:44.921211 1213155 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 14:28:44.921449 1213155 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 14:28:44.921483 1213155 cni.go:84] Creating CNI manager for ""
	I0414 14:28:44.921521 1213155 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0414 14:28:44.921528 1213155 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0414 14:28:44.921581 1213155 start.go:340] cluster config:
	{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:28:44.921681 1213155 iso.go:125] acquiring lock: {Name:mkbf783c803effe6c4b8297ac6b84dcca9e29413 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:28:44.923479 1213155 out.go:177] * Starting "ha-290859" primary control-plane node in "ha-290859" cluster
	I0414 14:28:44.924489 1213155 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:28:44.924534 1213155 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
	I0414 14:28:44.924545 1213155 cache.go:56] Caching tarball of preloaded images
	I0414 14:28:44.924630 1213155 preload.go:172] Found /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0414 14:28:44.924642 1213155 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0414 14:28:44.925004 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:28:44.925036 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json: {Name:mk9cf46898e9311ef305249e5d7a46d116958366 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:28:44.925215 1213155 start.go:360] acquireMachinesLock for ha-290859: {Name:mk496006d22a0565bb9e0d565e1b3cb0cf0971cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 14:28:44.925249 1213155 start.go:364] duration metric: took 19.936µs to acquireMachinesLock for "ha-290859"
	I0414 14:28:44.925270 1213155 start.go:93] Provisioning new machine with config: &{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0414 14:28:44.925333 1213155 start.go:125] createHost starting for "" (driver="kvm2")
	I0414 14:28:44.926873 1213155 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0414 14:28:44.927025 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:28:44.927081 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:28:44.941913 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35769
	I0414 14:28:44.942352 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:28:44.942833 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:28:44.942851 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:28:44.943193 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:28:44.943375 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:28:44.943526 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:28:44.943664 1213155 start.go:159] libmachine.API.Create for "ha-290859" (driver="kvm2")
	I0414 14:28:44.943687 1213155 client.go:168] LocalClient.Create starting
	I0414 14:28:44.943713 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem
	I0414 14:28:44.943749 1213155 main.go:141] libmachine: Decoding PEM data...
	I0414 14:28:44.943766 1213155 main.go:141] libmachine: Parsing certificate...
	I0414 14:28:44.943825 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem
	I0414 14:28:44.943844 1213155 main.go:141] libmachine: Decoding PEM data...
	I0414 14:28:44.943857 1213155 main.go:141] libmachine: Parsing certificate...
	I0414 14:28:44.943880 1213155 main.go:141] libmachine: Running pre-create checks...
	I0414 14:28:44.943888 1213155 main.go:141] libmachine: (ha-290859) Calling .PreCreateCheck
	I0414 14:28:44.944202 1213155 main.go:141] libmachine: (ha-290859) Calling .GetConfigRaw
	I0414 14:28:44.944583 1213155 main.go:141] libmachine: Creating machine...
	I0414 14:28:44.944596 1213155 main.go:141] libmachine: (ha-290859) Calling .Create
	I0414 14:28:44.944741 1213155 main.go:141] libmachine: (ha-290859) creating KVM machine...
	I0414 14:28:44.944764 1213155 main.go:141] libmachine: (ha-290859) creating network...
	I0414 14:28:44.945897 1213155 main.go:141] libmachine: (ha-290859) DBG | found existing default KVM network
	I0414 14:28:44.946500 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:44.946375 1213178 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001236b0}
	I0414 14:28:44.946525 1213155 main.go:141] libmachine: (ha-290859) DBG | created network xml: 
	I0414 14:28:44.946536 1213155 main.go:141] libmachine: (ha-290859) DBG | <network>
	I0414 14:28:44.946547 1213155 main.go:141] libmachine: (ha-290859) DBG |   <name>mk-ha-290859</name>
	I0414 14:28:44.946556 1213155 main.go:141] libmachine: (ha-290859) DBG |   <dns enable='no'/>
	I0414 14:28:44.946567 1213155 main.go:141] libmachine: (ha-290859) DBG |   
	I0414 14:28:44.946578 1213155 main.go:141] libmachine: (ha-290859) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0414 14:28:44.946589 1213155 main.go:141] libmachine: (ha-290859) DBG |     <dhcp>
	I0414 14:28:44.946597 1213155 main.go:141] libmachine: (ha-290859) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0414 14:28:44.946611 1213155 main.go:141] libmachine: (ha-290859) DBG |     </dhcp>
	I0414 14:28:44.946635 1213155 main.go:141] libmachine: (ha-290859) DBG |   </ip>
	I0414 14:28:44.946659 1213155 main.go:141] libmachine: (ha-290859) DBG |   
	I0414 14:28:44.946681 1213155 main.go:141] libmachine: (ha-290859) DBG | </network>
	I0414 14:28:44.946692 1213155 main.go:141] libmachine: (ha-290859) DBG | 
	I0414 14:28:44.951588 1213155 main.go:141] libmachine: (ha-290859) DBG | trying to create private KVM network mk-ha-290859 192.168.39.0/24...
	I0414 14:28:45.019463 1213155 main.go:141] libmachine: (ha-290859) DBG | private KVM network mk-ha-290859 192.168.39.0/24 created
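The generated network XML above is deliberately small: DNS is disabled, the bridge owns 192.168.39.1/24, and DHCP hands out .2 through .253. A minimal sketch of the define-and-start call pair, assuming the libvirt.org/go/libvirt bindings (minikube's own driver code is more involved):

package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

// defineAndStartNetwork registers a persistent network from XML like the
// mk-ha-290859 definition above, then brings it up.
func defineAndStartNetwork(xml string) error {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		return err
	}
	defer conn.Close()

	net, err := conn.NetworkDefineXML(xml) // persistent definition
	if err != nil {
		return err
	}
	defer net.Free()
	return net.Create() // start it, as "private KVM network ... created" reports
}

func main() {
	log.Println(defineAndStartNetwork(`<network>...</network>`)) // XML elided
}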
	I0414 14:28:45.019524 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.019424 1213178 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:28:45.019537 1213155 main.go:141] libmachine: (ha-290859) setting up store path in /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859 ...
	I0414 14:28:45.019577 1213155 main.go:141] libmachine: (ha-290859) building disk image from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 14:28:45.019612 1213155 main.go:141] libmachine: (ha-290859) Downloading /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 14:28:45.329551 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.329430 1213178 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa...
	I0414 14:28:45.651739 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.651571 1213178 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/ha-290859.rawdisk...
	I0414 14:28:45.651774 1213155 main.go:141] libmachine: (ha-290859) DBG | Writing magic tar header
	I0414 14:28:45.651813 1213155 main.go:141] libmachine: (ha-290859) DBG | Writing SSH key tar header
	I0414 14:28:45.651828 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.651709 1213178 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859 ...
	I0414 14:28:45.651838 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859
	I0414 14:28:45.651849 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines
	I0414 14:28:45.651870 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:28:45.651877 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368
	I0414 14:28:45.651888 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859 (perms=drwx------)
	I0414 14:28:45.651901 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines (perms=drwxr-xr-x)
	I0414 14:28:45.651912 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube (perms=drwxr-xr-x)
	I0414 14:28:45.651969 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 14:28:45.651997 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins
	I0414 14:28:45.652007 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368 (perms=drwxrwxr-x)
	I0414 14:28:45.652022 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 14:28:45.652031 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 14:28:45.652040 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home
	I0414 14:28:45.652050 1213155 main.go:141] libmachine: (ha-290859) DBG | skipping /home - not owner
	I0414 14:28:45.652117 1213155 main.go:141] libmachine: (ha-290859) creating domain...
	I0414 14:28:45.653155 1213155 main.go:141] libmachine: (ha-290859) define libvirt domain using xml: 
	I0414 14:28:45.653173 1213155 main.go:141] libmachine: (ha-290859) <domain type='kvm'>
	I0414 14:28:45.653182 1213155 main.go:141] libmachine: (ha-290859)   <name>ha-290859</name>
	I0414 14:28:45.653197 1213155 main.go:141] libmachine: (ha-290859)   <memory unit='MiB'>2200</memory>
	I0414 14:28:45.653206 1213155 main.go:141] libmachine: (ha-290859)   <vcpu>2</vcpu>
	I0414 14:28:45.653212 1213155 main.go:141] libmachine: (ha-290859)   <features>
	I0414 14:28:45.653231 1213155 main.go:141] libmachine: (ha-290859)     <acpi/>
	I0414 14:28:45.653240 1213155 main.go:141] libmachine: (ha-290859)     <apic/>
	I0414 14:28:45.653258 1213155 main.go:141] libmachine: (ha-290859)     <pae/>
	I0414 14:28:45.653267 1213155 main.go:141] libmachine: (ha-290859)     
	I0414 14:28:45.653272 1213155 main.go:141] libmachine: (ha-290859)   </features>
	I0414 14:28:45.653277 1213155 main.go:141] libmachine: (ha-290859)   <cpu mode='host-passthrough'>
	I0414 14:28:45.653281 1213155 main.go:141] libmachine: (ha-290859)   
	I0414 14:28:45.653287 1213155 main.go:141] libmachine: (ha-290859)   </cpu>
	I0414 14:28:45.653317 1213155 main.go:141] libmachine: (ha-290859)   <os>
	I0414 14:28:45.653340 1213155 main.go:141] libmachine: (ha-290859)     <type>hvm</type>
	I0414 14:28:45.653351 1213155 main.go:141] libmachine: (ha-290859)     <boot dev='cdrom'/>
	I0414 14:28:45.653362 1213155 main.go:141] libmachine: (ha-290859)     <boot dev='hd'/>
	I0414 14:28:45.653372 1213155 main.go:141] libmachine: (ha-290859)     <bootmenu enable='no'/>
	I0414 14:28:45.653379 1213155 main.go:141] libmachine: (ha-290859)   </os>
	I0414 14:28:45.653387 1213155 main.go:141] libmachine: (ha-290859)   <devices>
	I0414 14:28:45.653396 1213155 main.go:141] libmachine: (ha-290859)     <disk type='file' device='cdrom'>
	I0414 14:28:45.653409 1213155 main.go:141] libmachine: (ha-290859)       <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/boot2docker.iso'/>
	I0414 14:28:45.653425 1213155 main.go:141] libmachine: (ha-290859)       <target dev='hdc' bus='scsi'/>
	I0414 14:28:45.653434 1213155 main.go:141] libmachine: (ha-290859)       <readonly/>
	I0414 14:28:45.653441 1213155 main.go:141] libmachine: (ha-290859)     </disk>
	I0414 14:28:45.653450 1213155 main.go:141] libmachine: (ha-290859)     <disk type='file' device='disk'>
	I0414 14:28:45.653459 1213155 main.go:141] libmachine: (ha-290859)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 14:28:45.653472 1213155 main.go:141] libmachine: (ha-290859)       <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/ha-290859.rawdisk'/>
	I0414 14:28:45.653484 1213155 main.go:141] libmachine: (ha-290859)       <target dev='hda' bus='virtio'/>
	I0414 14:28:45.653515 1213155 main.go:141] libmachine: (ha-290859)     </disk>
	I0414 14:28:45.653535 1213155 main.go:141] libmachine: (ha-290859)     <interface type='network'>
	I0414 14:28:45.653542 1213155 main.go:141] libmachine: (ha-290859)       <source network='mk-ha-290859'/>
	I0414 14:28:45.653551 1213155 main.go:141] libmachine: (ha-290859)       <model type='virtio'/>
	I0414 14:28:45.653571 1213155 main.go:141] libmachine: (ha-290859)     </interface>
	I0414 14:28:45.653583 1213155 main.go:141] libmachine: (ha-290859)     <interface type='network'>
	I0414 14:28:45.653600 1213155 main.go:141] libmachine: (ha-290859)       <source network='default'/>
	I0414 14:28:45.653612 1213155 main.go:141] libmachine: (ha-290859)       <model type='virtio'/>
	I0414 14:28:45.653620 1213155 main.go:141] libmachine: (ha-290859)     </interface>
	I0414 14:28:45.653629 1213155 main.go:141] libmachine: (ha-290859)     <serial type='pty'>
	I0414 14:28:45.653637 1213155 main.go:141] libmachine: (ha-290859)       <target port='0'/>
	I0414 14:28:45.653643 1213155 main.go:141] libmachine: (ha-290859)     </serial>
	I0414 14:28:45.653650 1213155 main.go:141] libmachine: (ha-290859)     <console type='pty'>
	I0414 14:28:45.653666 1213155 main.go:141] libmachine: (ha-290859)       <target type='serial' port='0'/>
	I0414 14:28:45.653677 1213155 main.go:141] libmachine: (ha-290859)     </console>
	I0414 14:28:45.653688 1213155 main.go:141] libmachine: (ha-290859)     <rng model='virtio'>
	I0414 14:28:45.653706 1213155 main.go:141] libmachine: (ha-290859)       <backend model='random'>/dev/random</backend>
	I0414 14:28:45.653722 1213155 main.go:141] libmachine: (ha-290859)     </rng>
	I0414 14:28:45.653733 1213155 main.go:141] libmachine: (ha-290859)     
	I0414 14:28:45.653742 1213155 main.go:141] libmachine: (ha-290859)     
	I0414 14:28:45.653750 1213155 main.go:141] libmachine: (ha-290859)   </devices>
	I0414 14:28:45.653759 1213155 main.go:141] libmachine: (ha-290859) </domain>
	I0414 14:28:45.653770 1213155 main.go:141] libmachine: (ha-290859) 
	I0414 14:28:45.658722 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:59:bb:2c in network default
	I0414 14:28:45.659333 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:45.659353 1213155 main.go:141] libmachine: (ha-290859) starting domain...
	I0414 14:28:45.659378 1213155 main.go:141] libmachine: (ha-290859) ensuring networks are active...
	I0414 14:28:45.660118 1213155 main.go:141] libmachine: (ha-290859) Ensuring network default is active
	I0414 14:28:45.660455 1213155 main.go:141] libmachine: (ha-290859) Ensuring network mk-ha-290859 is active
	I0414 14:28:45.660871 1213155 main.go:141] libmachine: (ha-290859) getting domain XML...
	I0414 14:28:45.661572 1213155 main.go:141] libmachine: (ha-290859) creating domain...
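The domain XML above follows the same define-then-start pattern as the network: boot order cdrom-then-hd so the boot2docker ISO can provision the raw disk, one virtio NIC on mk-ha-290859 and one on the default network, plus a serial console and virtio-rng. A short companion sketch, again assuming the libvirt.org/go/libvirt bindings:

package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

// defineAndStartDomain registers the domain XML and boots the VM, matching
// the "define libvirt domain using xml" / "creating domain..." steps above.
func defineAndStartDomain(conn *libvirt.Connect, domainXML string) error {
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return err
	}
	defer dom.Free()
	return dom.Create()
}

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	log.Println(defineAndStartDomain(conn, `<domain type='kvm'>...</domain>`)) // XML elided
}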
	I0414 14:28:46.865636 1213155 main.go:141] libmachine: (ha-290859) waiting for IP...
	I0414 14:28:46.866384 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:46.866766 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:46.866798 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:46.866746 1213178 retry.go:31] will retry after 192.973653ms: waiting for domain to come up
	I0414 14:28:47.061336 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:47.061771 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:47.061833 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:47.061746 1213178 retry.go:31] will retry after 359.567223ms: waiting for domain to come up
	I0414 14:28:47.423487 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:47.423982 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:47.424016 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:47.423949 1213178 retry.go:31] will retry after 421.939914ms: waiting for domain to come up
	I0414 14:28:47.847747 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:47.848233 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:47.848285 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:47.848207 1213178 retry.go:31] will retry after 530.391474ms: waiting for domain to come up
	I0414 14:28:48.380081 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:48.380580 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:48.380623 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:48.380551 1213178 retry.go:31] will retry after 642.117854ms: waiting for domain to come up
	I0414 14:28:49.024104 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:49.024507 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:49.024543 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:49.024472 1213178 retry.go:31] will retry after 676.607867ms: waiting for domain to come up
	I0414 14:28:49.702625 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:49.702971 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:49.702999 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:49.702940 1213178 retry.go:31] will retry after 827.403569ms: waiting for domain to come up
	I0414 14:28:50.531673 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:50.532146 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:50.532168 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:50.532111 1213178 retry.go:31] will retry after 1.096062201s: waiting for domain to come up
	I0414 14:28:51.630700 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:51.631223 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:51.631271 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:51.631180 1213178 retry.go:31] will retry after 1.695737217s: waiting for domain to come up
	I0414 14:28:53.328391 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:53.328936 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:53.328976 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:53.328895 1213178 retry.go:31] will retry after 1.847433296s: waiting for domain to come up
	I0414 14:28:55.178635 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:55.179196 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:55.179222 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:55.179116 1213178 retry.go:31] will retry after 1.882043118s: waiting for domain to come up
	I0414 14:28:57.063275 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:57.063819 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:57.063839 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:57.063785 1213178 retry.go:31] will retry after 2.565601812s: waiting for domain to come up
	I0414 14:28:59.632546 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:59.633076 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:59.633121 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:59.633056 1213178 retry.go:31] will retry after 3.119155423s: waiting for domain to come up
	I0414 14:29:02.755950 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:02.756520 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:29:02.756617 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:29:02.756481 1213178 retry.go:31] will retry after 3.570724653s: waiting for domain to come up
	I0414 14:29:06.329744 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.330242 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has current primary IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.330260 1213155 main.go:141] libmachine: (ha-290859) found domain IP: 192.168.39.110
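The retry cadence above (192ms, 359ms, 421ms, ... up to 3.57s) is a jittered, roughly exponential backoff while libvirt's DHCP server leases an address to the new MAC. A hedged sketch of the same poll against the network's lease table — the backoff constants and helper are illustrative, not minikube's retry package; the lease fields (Mac, IPaddr) match the ones the log prints:

package main

import (
	"fmt"
	"log"
	"strings"
	"time"

	libvirt "libvirt.org/go/libvirt"
)

// waitForIP polls the network's DHCP leases until the domain's MAC shows up,
// roughly doubling the delay between attempts as the log above does.
func waitForIP(net *libvirt.Network, mac string, attempts int) (string, error) {
	delay := 200 * time.Millisecond
	for i := 0; i < attempts; i++ {
		leases, err := net.GetDHCPLeases()
		if err != nil {
			return "", err
		}
		for _, l := range leases {
			if strings.EqualFold(l.Mac, mac) {
				return l.IPaddr, nil // e.g. 192.168.39.110 for 52:54:00:be:9f:8b
			}
		}
		time.Sleep(delay)
		delay *= 2
	}
	return "", fmt.Errorf("no DHCP lease for %s after %d attempts", mac, attempts)
}

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	net, err := conn.LookupNetworkByName("mk-ha-290859")
	if err != nil {
		log.Fatal(err)
	}
	defer net.Free()
	log.Println(waitForIP(net, "52:54:00:be:9f:8b", 15))
}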
	I0414 14:29:06.330269 1213155 main.go:141] libmachine: (ha-290859) reserving static IP address...
	I0414 14:29:06.330641 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find host DHCP lease matching {name: "ha-290859", mac: "52:54:00:be:9f:8b", ip: "192.168.39.110"} in network mk-ha-290859
	I0414 14:29:06.406487 1213155 main.go:141] libmachine: (ha-290859) DBG | Getting to WaitForSSH function...
	I0414 14:29:06.406521 1213155 main.go:141] libmachine: (ha-290859) reserved static IP address 192.168.39.110 for domain ha-290859
	I0414 14:29:06.406533 1213155 main.go:141] libmachine: (ha-290859) waiting for SSH...
	I0414 14:29:06.409873 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.410210 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:minikube Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.410253 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.410314 1213155 main.go:141] libmachine: (ha-290859) DBG | Using SSH client type: external
	I0414 14:29:06.410387 1213155 main.go:141] libmachine: (ha-290859) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa (-rw-------)
	I0414 14:29:06.410418 1213155 main.go:141] libmachine: (ha-290859) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.110 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:29:06.410439 1213155 main.go:141] libmachine: (ha-290859) DBG | About to run SSH command:
	I0414 14:29:06.410452 1213155 main.go:141] libmachine: (ha-290859) DBG | exit 0
	I0414 14:29:06.535060 1213155 main.go:141] libmachine: (ha-290859) DBG | SSH cmd err, output: <nil>: 
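The exit 0 probe above is the cheapest possible readiness check: any successful login proves sshd is up and the injected key is accepted. The driver shells out to the system ssh with host-key checking disabled; a self-contained equivalent using golang.org/x/crypto/ssh (the address, user, and key path are taken from the log, everything else is an assumption):

package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// probeSSH dials the new VM and runs "exit 0", mirroring the WaitForSSH
// check in the log. Host keys are ignored, as in the driver's ssh flags.
func probeSSH(addr, user, keyPath string) error {
	pem, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(pem)
	if err != nil {
		return err
	}
	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	})
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.Run("exit 0")
}

func main() {
	log.Println(probeSSH("192.168.39.110:22", "docker",
		"/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa"))
}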
	I0414 14:29:06.535328 1213155 main.go:141] libmachine: (ha-290859) KVM machine creation complete
	I0414 14:29:06.535695 1213155 main.go:141] libmachine: (ha-290859) Calling .GetConfigRaw
	I0414 14:29:06.536306 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:06.536530 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:06.536742 1213155 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 14:29:06.536766 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:06.538276 1213155 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 14:29:06.538292 1213155 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 14:29:06.538297 1213155 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 14:29:06.538303 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:06.540789 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.541096 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.541142 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.541273 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:06.541468 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.541620 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.541797 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:06.541943 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:06.542216 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:06.542236 1213155 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 14:29:06.650464 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:29:06.650493 1213155 main.go:141] libmachine: Detecting the provisioner...
	I0414 14:29:06.650505 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:06.653952 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.654723 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.654757 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.654985 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:06.655204 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.655393 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.655541 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:06.655742 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:06.655964 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:06.655983 1213155 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 14:29:06.763752 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 14:29:06.763848 1213155 main.go:141] libmachine: found compatible host: buildroot
	I0414 14:29:06.763862 1213155 main.go:141] libmachine: Provisioning with buildroot...
	I0414 14:29:06.763874 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:29:06.764294 1213155 buildroot.go:166] provisioning hostname "ha-290859"
	I0414 14:29:06.764326 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:29:06.764523 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:06.767077 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.767516 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.767542 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.767639 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:06.767813 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.767978 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.768165 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:06.768341 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:06.768572 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:06.768583 1213155 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-290859 && echo "ha-290859" | sudo tee /etc/hostname
	I0414 14:29:06.889296 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-290859
	
	I0414 14:29:06.889330 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:06.892172 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.892600 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.892626 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.892865 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:06.893083 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.893277 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.893435 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:06.893648 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:06.893858 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:06.893874 1213155 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-290859' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-290859/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-290859' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:29:07.007141 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:29:07.007184 1213155 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1196368/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1196368/.minikube}
	I0414 14:29:07.007203 1213155 buildroot.go:174] setting up certificates
	I0414 14:29:07.007215 1213155 provision.go:84] configureAuth start
	I0414 14:29:07.007224 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:29:07.007528 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:29:07.010400 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.010788 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.010824 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.010979 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.012963 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.013271 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.013387 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.013515 1213155 provision.go:143] copyHostCerts
	I0414 14:29:07.013548 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:29:07.013586 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem, removing ...
	I0414 14:29:07.013609 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:29:07.013691 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem (1082 bytes)
	I0414 14:29:07.013790 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:29:07.013815 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem, removing ...
	I0414 14:29:07.013825 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:29:07.013863 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem (1123 bytes)
	I0414 14:29:07.013930 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:29:07.013953 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem, removing ...
	I0414 14:29:07.013962 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:29:07.013998 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem (1675 bytes)
	I0414 14:29:07.014066 1213155 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem org=jenkins.ha-290859 san=[127.0.0.1 192.168.39.110 ha-290859 localhost minikube]
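Editor's note: a minimal Go sketch of issuing a server certificate with the SAN list logged above (127.0.0.1, 192.168.39.110, ha-290859, localhost, minikube) using crypto/x509. It self-signs for brevity, whereas minikube signs the server cert with its CA key (ca-key.pem).

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-290859"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // ~26280h, matching CertExpiration in the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the log line above.
		DNSNames:    []string{"ha-290859", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.110")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}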
	I0414 14:29:07.096347 1213155 provision.go:177] copyRemoteCerts
	I0414 14:29:07.096413 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:29:07.096445 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.099387 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.099720 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.099754 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.099919 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.100133 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.100320 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.100477 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:07.185597 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0414 14:29:07.185665 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 14:29:07.208427 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0414 14:29:07.208514 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0414 14:29:07.230077 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0414 14:29:07.230146 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 14:29:07.252057 1213155 provision.go:87] duration metric: took 244.822415ms to configureAuth
	I0414 14:29:07.252098 1213155 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:29:07.252381 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:07.252417 1213155 main.go:141] libmachine: Checking connection to Docker...
	I0414 14:29:07.252428 1213155 main.go:141] libmachine: (ha-290859) Calling .GetURL
	I0414 14:29:07.253526 1213155 main.go:141] libmachine: (ha-290859) DBG | using libvirt version 6000000
	I0414 14:29:07.255629 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.255987 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.256013 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.256164 1213155 main.go:141] libmachine: Docker is up and running!
	I0414 14:29:07.256179 1213155 main.go:141] libmachine: Reticulating splines...
	I0414 14:29:07.256186 1213155 client.go:171] duration metric: took 22.312490028s to LocalClient.Create
	I0414 14:29:07.256207 1213155 start.go:167] duration metric: took 22.312544194s to libmachine.API.Create "ha-290859"
	I0414 14:29:07.256216 1213155 start.go:293] postStartSetup for "ha-290859" (driver="kvm2")
	I0414 14:29:07.256225 1213155 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:29:07.256242 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.256494 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:29:07.256518 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.258683 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.259095 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.259129 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.259274 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.259443 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.259598 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.259770 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:07.341222 1213155 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:29:07.344960 1213155 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:29:07.344983 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/addons for local assets ...
	I0414 14:29:07.345036 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/files for local assets ...
	I0414 14:29:07.345105 1213155 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> 12036392.pem in /etc/ssl/certs
	I0414 14:29:07.345117 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /etc/ssl/certs/12036392.pem
	I0414 14:29:07.345204 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:29:07.353618 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:29:07.375295 1213155 start.go:296] duration metric: took 119.0622ms for postStartSetup
	I0414 14:29:07.375348 1213155 main.go:141] libmachine: (ha-290859) Calling .GetConfigRaw
	I0414 14:29:07.376009 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:29:07.378738 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.379089 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.379127 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.379360 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:29:07.379552 1213155 start.go:128] duration metric: took 22.454193164s to createHost
	I0414 14:29:07.379576 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.381911 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.382271 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.382299 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.382412 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.382636 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.382763 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.382918 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.383103 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:07.383383 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:07.383397 1213155 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:29:07.491798 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744640947.466359070
	
	I0414 14:29:07.491832 1213155 fix.go:216] guest clock: 1744640947.466359070
	I0414 14:29:07.491843 1213155 fix.go:229] Guest: 2025-04-14 14:29:07.46635907 +0000 UTC Remote: 2025-04-14 14:29:07.37956282 +0000 UTC m=+22.563725092 (delta=86.79625ms)
	I0414 14:29:07.491874 1213155 fix.go:200] guest clock delta is within tolerance: 86.79625ms
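Editor's note: the clock check above compares the guest's `date +%s.%N` output against the host-side timestamp and accepts drift inside a tolerance window. A minimal sketch follows; the one-second tolerance is assumed for illustration, and float parsing is precise enough for a millisecond-scale check.

package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses `date +%s.%N` output and returns guest minus remote.
func clockDelta(guestOut string, remote time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(remote), nil
}

func main() {
	remote := time.Unix(1744640947, 379562820) // host-side timestamp from the log
	d, err := clockDelta("1744640947.466359070", remote)
	if err != nil {
		panic(err)
	}
	fmt.Printf("delta=%v within tolerance: %v\n", d, d.Abs() < time.Second)
}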
	I0414 14:29:07.491882 1213155 start.go:83] releasing machines lock for "ha-290859", held for 22.566621352s
	I0414 14:29:07.491951 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.492257 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:29:07.494784 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.495186 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.495213 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.495369 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.495891 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.496108 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.496210 1213155 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:29:07.496270 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.496330 1213155 ssh_runner.go:195] Run: cat /version.json
	I0414 14:29:07.496359 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.499187 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.499556 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.499585 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.499605 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.499687 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.499909 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.500059 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.500076 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.500080 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.500225 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:07.500343 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.500495 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.500676 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.500868 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:07.610155 1213155 ssh_runner.go:195] Run: systemctl --version
	I0414 14:29:07.615832 1213155 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 14:29:07.620841 1213155 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:29:07.620918 1213155 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:29:07.635201 1213155 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 14:29:07.635238 1213155 start.go:495] detecting cgroup driver to use...
	I0414 14:29:07.635339 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0414 14:29:07.664507 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0414 14:29:07.677886 1213155 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:29:07.677968 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:29:07.691126 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:29:07.704327 1213155 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:29:07.821296 1213155 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:29:07.981478 1213155 docker.go:233] disabling docker service ...
	I0414 14:29:07.981570 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:29:07.995082 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:29:08.007593 1213155 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:29:08.118166 1213155 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:29:08.233009 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 14:29:08.245943 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:29:08.262966 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0414 14:29:08.272218 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0414 14:29:08.281344 1213155 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0414 14:29:08.281397 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0414 14:29:08.290468 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:29:08.299561 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0414 14:29:08.308656 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:29:08.317719 1213155 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:29:08.327133 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0414 14:29:08.336264 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0414 14:29:08.345279 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
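Editor's note: the sequence of sed invocations above rewrites /etc/containerd/config.toml in place (sandbox image, cgroupfs driver, runc v2, CNI conf dir). A minimal Go sketch that replays a representative subset of those edits via `sh -c`, mirroring the ssh_runner calls; error handling is minimal.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Each entry is one of the sed expressions from the log above.
	edits := []string{
		`sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml`,
		`sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
		`sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml`,
		`sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml`,
	}
	for _, e := range edits {
		if out, err := exec.Command("sh", "-c", e).CombinedOutput(); err != nil {
			fmt.Printf("edit failed: %v: %s\n", err, out)
		}
	}
}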
	I0414 14:29:08.354386 1213155 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:29:08.362578 1213155 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:29:08.362625 1213155 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:29:08.374609 1213155 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 14:29:08.383117 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:29:08.490311 1213155 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0414 14:29:08.517222 1213155 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0414 14:29:08.517297 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:29:08.522141 1213155 retry.go:31] will retry after 1.326617724s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
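Editor's note: the "will retry after 1.326617724s" line above comes from a generic retry helper (retry.go). A minimal sketch of the pattern, with a simple doubling backoff assumed; minikube's retry package chooses its own delays.

package main

import (
	"fmt"
	"time"
)

// retryUntil reruns probe with a doubling delay until it succeeds or
// the overall deadline would be exceeded.
func retryUntil(deadline time.Duration, probe func() error) error {
	delay := time.Second
	start := time.Now()
	for {
		err := probe()
		if err == nil {
			return nil
		}
		if time.Since(start)+delay > deadline {
			return fmt.Errorf("timed out: last error: %w", err)
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2
	}
}

func main() {
	attempts := 0
	_ = retryUntil(60*time.Second, func() error {
		attempts++
		if attempts < 3 {
			return fmt.Errorf("socket not ready")
		}
		return nil
	})
}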
	I0414 14:29:09.849693 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:29:09.855377 1213155 start.go:563] Will wait 60s for crictl version
	I0414 14:29:09.855452 1213155 ssh_runner.go:195] Run: which crictl
	I0414 14:29:09.859356 1213155 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:29:09.901676 1213155 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0414 14:29:09.901749 1213155 ssh_runner.go:195] Run: containerd --version
	I0414 14:29:09.933729 1213155 ssh_runner.go:195] Run: containerd --version
	I0414 14:29:09.957147 1213155 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.23 ...
	I0414 14:29:09.958358 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:29:09.961074 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:09.961436 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:09.961465 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:09.961654 1213155 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 14:29:09.965618 1213155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:29:09.977763 1213155 kubeadm.go:883] updating cluster {Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0414 14:29:09.977920 1213155 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:29:09.977985 1213155 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:29:10.007423 1213155 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0414 14:29:10.007567 1213155 ssh_runner.go:195] Run: which lz4
	I0414 14:29:10.011302 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0414 14:29:10.011399 1213155 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 14:29:10.015201 1213155 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 14:29:10.015237 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (398567491 bytes)
	I0414 14:29:11.177802 1213155 containerd.go:563] duration metric: took 1.166430977s to copy over tarball
	I0414 14:29:11.177883 1213155 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 14:29:13.222422 1213155 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.044497794s)
	I0414 14:29:13.222461 1213155 containerd.go:570] duration metric: took 2.04462504s to extract the tarball
	I0414 14:29:13.222471 1213155 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 14:29:13.258541 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:29:13.368119 1213155 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0414 14:29:13.394813 1213155 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:29:13.428402 1213155 retry.go:31] will retry after 248.442754ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-04-14T14:29:13Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
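Editor's note: the fatal crictl error above occurs because /run/containerd/containerd.sock is briefly absent while containerd restarts. A minimal sketch of polling the socket until it accepts connections:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSocket dials the unix socket until a connection succeeds or
// the deadline expires.
func waitForSocket(path string, deadline time.Duration) error {
	start := time.Now()
	for time.Since(start) < deadline {
		conn, err := net.DialTimeout("unix", path, time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(250 * time.Millisecond)
	}
	return fmt.Errorf("socket %s not ready after %v", path, deadline)
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}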
	I0414 14:29:13.677983 1213155 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:29:13.709958 1213155 containerd.go:627] all images are preloaded for containerd runtime.
	I0414 14:29:13.709986 1213155 cache_images.go:84] Images are preloaded, skipping loading
	I0414 14:29:13.709997 1213155 kubeadm.go:934] updating node { 192.168.39.110 8443 v1.32.2 containerd true true} ...
	I0414 14:29:13.710119 1213155 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-290859 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.110
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 14:29:13.710205 1213155 ssh_runner.go:195] Run: sudo crictl info
	I0414 14:29:13.747854 1213155 cni.go:84] Creating CNI manager for ""
	I0414 14:29:13.747881 1213155 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0414 14:29:13.747891 1213155 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 14:29:13.747912 1213155 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.110 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-290859 NodeName:ha-290859 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.110"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.110 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 14:29:13.748064 1213155 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.110
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-290859"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.110"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.110"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
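Editor's note: the kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal sketch of walking that stream and listing each document's kind, assuming gopkg.in/yaml.v3 is on the module path:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	// yaml.v3's decoder yields one document per Decode call and
	// returns io.EOF after the last "---"-separated document.
	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%v/%v\n", doc["apiVersion"], doc["kind"])
	}
}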
	
	I0414 14:29:13.748098 1213155 kube-vip.go:115] generating kube-vip config ...
	I0414 14:29:13.748144 1213155 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0414 14:29:13.764006 1213155 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0414 14:29:13.764157 1213155 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.10
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
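Editor's note: the kube-vip static-pod manifest above is rendered from a template (kube-vip.go:137 in the log). A minimal text/template sketch follows; the field names and cut-down manifest are illustrative, not minikube's actual template.

package main

import (
	"os"
	"text/template"
)

type vipParams struct {
	VIP   string
	Port  string
	Image string
}

// A trimmed manifest showing how the VIP address, port, and image are
// substituted into the pod spec.
const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args: ["manager"]
    env:
    - name: address
      value: {{.VIP}}
    - name: port
      value: "{{.Port}}"
    image: {{.Image}}
    name: kube-vip
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifest))
	_ = t.Execute(os.Stdout, vipParams{
		VIP:   "192.168.39.254",
		Port:  "8443",
		Image: "ghcr.io/kube-vip/kube-vip:v0.8.10",
	})
}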
	I0414 14:29:13.764258 1213155 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 14:29:13.773742 1213155 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 14:29:13.773825 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0414 14:29:13.782879 1213155 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0414 14:29:13.798384 1213155 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 14:29:13.813614 1213155 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0414 14:29:13.828571 1213155 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1448 bytes)
	I0414 14:29:13.844489 1213155 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0414 14:29:13.848595 1213155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:29:13.861109 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:29:13.970530 1213155 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:29:13.987774 1213155 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859 for IP: 192.168.39.110
	I0414 14:29:13.987806 1213155 certs.go:194] generating shared ca certs ...
	I0414 14:29:13.987826 1213155 certs.go:226] acquiring lock for ca certs: {Name:mk7215406b4c41badf9eca6bf9f1036fd88f670e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:13.988007 1213155 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key
	I0414 14:29:13.988081 1213155 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key
	I0414 14:29:13.988097 1213155 certs.go:256] generating profile certs ...
	I0414 14:29:13.988180 1213155 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key
	I0414 14:29:13.988200 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt with IP's: []
	I0414 14:29:14.112386 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt ...
	I0414 14:29:14.112419 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt: {Name:mkaa12fb6551a5751b7fccd564d65a45c41d9fae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.112582 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key ...
	I0414 14:29:14.112593 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key: {Name:mk289f4dd0a4fd9031dc4ffc7198a0cf95bd5550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.112674 1213155 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037
	I0414 14:29:14.112690 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110 192.168.39.254]
	I0414 14:29:14.362652 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037 ...
	I0414 14:29:14.362686 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037: {Name:mkb37a2918627d85c90b385a1878c8973ae4ce15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.362861 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037 ...
	I0414 14:29:14.362875 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037: {Name:mk9be12aff468559ae8511cb5c354c2cb0f19d89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.362947 1213155 certs.go:381] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt
	I0414 14:29:14.363058 1213155 certs.go:385] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key
	I0414 14:29:14.363124 1213155 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key
	I0414 14:29:14.363139 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt with IP's: []
	I0414 14:29:14.734988 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt ...
	I0414 14:29:14.735020 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt: {Name:mkd4197f76084714cf4c93b86f69c9de5e486dfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.735175 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key ...
	I0414 14:29:14.735185 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key: {Name:mkafd73813de8b0bb698e460f51557bc241d5b76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.735249 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0414 14:29:14.735287 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0414 14:29:14.735300 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0414 14:29:14.735312 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0414 14:29:14.735324 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0414 14:29:14.735336 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0414 14:29:14.735348 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0414 14:29:14.735362 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0414 14:29:14.735413 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem (1338 bytes)
	W0414 14:29:14.735450 1213155 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639_empty.pem, impossibly tiny 0 bytes
	I0414 14:29:14.735459 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 14:29:14.735483 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem (1082 bytes)
	I0414 14:29:14.735504 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:29:14.735524 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem (1675 bytes)
	I0414 14:29:14.735559 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:29:14.735585 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:14.735598 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem -> /usr/share/ca-certificates/1203639.pem
	I0414 14:29:14.735609 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /usr/share/ca-certificates/12036392.pem
	I0414 14:29:14.736193 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:29:14.767094 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:29:14.800218 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:29:14.821856 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:29:14.844537 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0414 14:29:14.866333 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 14:29:14.888112 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:29:14.916382 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 14:29:14.938747 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:29:14.961044 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem --> /usr/share/ca-certificates/1203639.pem (1338 bytes)
	I0414 14:29:14.982817 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /usr/share/ca-certificates/12036392.pem (1708 bytes)
	I0414 14:29:15.004432 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 14:29:15.020381 1213155 ssh_runner.go:195] Run: openssl version
	I0414 14:29:15.026049 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:29:15.036472 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:15.040722 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:15.040772 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:15.046327 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 14:29:15.056866 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1203639.pem && ln -fs /usr/share/ca-certificates/1203639.pem /etc/ssl/certs/1203639.pem"
	I0414 14:29:15.067689 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1203639.pem
	I0414 14:29:15.071944 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1203639.pem
	I0414 14:29:15.071988 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1203639.pem
	I0414 14:29:15.077553 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1203639.pem /etc/ssl/certs/51391683.0"
	I0414 14:29:15.088088 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12036392.pem && ln -fs /usr/share/ca-certificates/12036392.pem /etc/ssl/certs/12036392.pem"
	I0414 14:29:15.098760 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12036392.pem
	I0414 14:29:15.103102 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/12036392.pem
	I0414 14:29:15.103157 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12036392.pem
	I0414 14:29:15.108670 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12036392.pem /etc/ssl/certs/3ec20f2e.0"
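Editor's note: each `openssl x509 -hash` / `ln -fs` pair above publishes a CA certificate under its OpenSSL subject hash so OpenSSL-based clients can find it in /etc/ssl/certs. A minimal sketch of that step (requires root and an openssl binary, as in the VM):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert asks openssl for the cert's subject hash and links
// /etc/ssl/certs/<hash>.0 to it if the link is not already present.
func linkCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}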
	I0414 14:29:15.119187 1213155 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:29:15.123052 1213155 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 14:29:15.123124 1213155 kubeadm.go:392] StartCluster: {Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:29:15.123226 1213155 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0414 14:29:15.123302 1213155 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 14:29:15.161985 1213155 cri.go:89] found id: ""
	I0414 14:29:15.162066 1213155 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 14:29:15.171810 1213155 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 14:29:15.180816 1213155 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 14:29:15.189781 1213155 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 14:29:15.189798 1213155 kubeadm.go:157] found existing configuration files:
	
	I0414 14:29:15.189837 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 14:29:15.198461 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 14:29:15.198520 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 14:29:15.207495 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 14:29:15.216131 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 14:29:15.216195 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 14:29:15.224923 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 14:29:15.233259 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 14:29:15.233331 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 14:29:15.241811 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 14:29:15.250678 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 14:29:15.250735 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
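
	The cleanup sequence above follows one pattern: grep each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and delete any file that does not reference it, so that the following "kubeadm init" regenerates a consistent set. A minimal Go sketch of that pattern, assuming a hypothetical runRemote closure in place of minikube's ssh_runner:

	    package main

	    import "fmt"

	    // cleanupStaleConfigs mirrors the pattern logged above (kubeadm.go:155-163):
	    // grep each kubeconfig for the expected control-plane endpoint and remove
	    // files that do not reference it, so "kubeadm init" recreates them.
	    // runRemote is a hypothetical stand-in for minikube's ssh_runner, not a real API.
	    func cleanupStaleConfigs(runRemote func(cmd string) error, endpoint string) {
	        for _, cfg := range []string{
	            "/etc/kubernetes/admin.conf",
	            "/etc/kubernetes/kubelet.conf",
	            "/etc/kubernetes/controller-manager.conf",
	            "/etc/kubernetes/scheduler.conf",
	        } {
	            // grep exits non-zero if the endpoint (or the file itself) is missing.
	            if err := runRemote(fmt.Sprintf("sudo grep %s %s", endpoint, cfg)); err != nil {
	                _ = runRemote("sudo rm -f " + cfg) // stale or absent: remove and let kubeadm recreate it
	            }
	        }
	    }

	    func main() {
	        fake := func(cmd string) error { fmt.Println("run:", cmd); return nil }
	        cleanupStaleConfigs(fake, "https://control-plane.minikube.internal:8443")
	    }
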
	I0414 14:29:15.260028 1213155 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 14:29:15.480841 1213155 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 14:29:26.375395 1213155 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0414 14:29:26.375454 1213155 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 14:29:26.375539 1213155 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 14:29:26.375638 1213155 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 14:29:26.375756 1213155 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 14:29:26.375859 1213155 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 14:29:26.377483 1213155 out.go:235]   - Generating certificates and keys ...
	I0414 14:29:26.377576 1213155 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 14:29:26.377649 1213155 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 14:29:26.377746 1213155 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 14:29:26.377814 1213155 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 14:29:26.377894 1213155 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 14:29:26.377993 1213155 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 14:29:26.378062 1213155 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 14:29:26.378201 1213155 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-290859 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
	I0414 14:29:26.378273 1213155 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 14:29:26.378435 1213155 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-290859 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
	I0414 14:29:26.378525 1213155 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 14:29:26.378617 1213155 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 14:29:26.378679 1213155 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 14:29:26.378756 1213155 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 14:29:26.378826 1213155 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 14:29:26.378905 1213155 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0414 14:29:26.378987 1213155 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 14:29:26.379078 1213155 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 14:29:26.379147 1213155 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 14:29:26.379232 1213155 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 14:29:26.379336 1213155 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 14:29:26.381520 1213155 out.go:235]   - Booting up control plane ...
	I0414 14:29:26.381636 1213155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 14:29:26.381716 1213155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 14:29:26.381797 1213155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 14:29:26.381942 1213155 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 14:29:26.382066 1213155 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 14:29:26.382127 1213155 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 14:29:26.382279 1213155 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0414 14:29:26.382430 1213155 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0414 14:29:26.382522 1213155 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.073677ms
	I0414 14:29:26.382613 1213155 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0414 14:29:26.382699 1213155 kubeadm.go:310] [api-check] The API server is healthy after 6.046564753s
	I0414 14:29:26.382824 1213155 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0414 14:29:26.382965 1213155 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0414 14:29:26.383055 1213155 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0414 14:29:26.383232 1213155 kubeadm.go:310] [mark-control-plane] Marking the node ha-290859 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0414 14:29:26.383336 1213155 kubeadm.go:310] [bootstrap-token] Using token: vqb1fe.jxjhh2el8g0wstxf
	I0414 14:29:26.384515 1213155 out.go:235]   - Configuring RBAC rules ...
	I0414 14:29:26.384631 1213155 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0414 14:29:26.384713 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0414 14:29:26.384863 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0414 14:29:26.384975 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0414 14:29:26.385071 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0414 14:29:26.385151 1213155 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0414 14:29:26.385262 1213155 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0414 14:29:26.385326 1213155 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0414 14:29:26.385400 1213155 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0414 14:29:26.385416 1213155 kubeadm.go:310] 
	I0414 14:29:26.385469 1213155 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0414 14:29:26.385475 1213155 kubeadm.go:310] 
	I0414 14:29:26.385551 1213155 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0414 14:29:26.385557 1213155 kubeadm.go:310] 
	I0414 14:29:26.385578 1213155 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0414 14:29:26.385628 1213155 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0414 14:29:26.385686 1213155 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0414 14:29:26.385693 1213155 kubeadm.go:310] 
	I0414 14:29:26.385743 1213155 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0414 14:29:26.385752 1213155 kubeadm.go:310] 
	I0414 14:29:26.385800 1213155 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0414 14:29:26.385806 1213155 kubeadm.go:310] 
	I0414 14:29:26.385852 1213155 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0414 14:29:26.385921 1213155 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0414 14:29:26.385993 1213155 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0414 14:29:26.385999 1213155 kubeadm.go:310] 
	I0414 14:29:26.386068 1213155 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0414 14:29:26.386137 1213155 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0414 14:29:26.386143 1213155 kubeadm.go:310] 
	I0414 14:29:26.386219 1213155 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token vqb1fe.jxjhh2el8g0wstxf \
	I0414 14:29:26.386324 1213155 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c1bc537cee1b1ab5982921331b936a1839b1da6b0963279993bdeae11071854b \
	I0414 14:29:26.386357 1213155 kubeadm.go:310] 	--control-plane 
	I0414 14:29:26.386367 1213155 kubeadm.go:310] 
	I0414 14:29:26.386468 1213155 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0414 14:29:26.386481 1213155 kubeadm.go:310] 
	I0414 14:29:26.386583 1213155 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vqb1fe.jxjhh2el8g0wstxf \
	I0414 14:29:26.386727 1213155 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c1bc537cee1b1ab5982921331b936a1839b1da6b0963279993bdeae11071854b 
	I0414 14:29:26.386755 1213155 cni.go:84] Creating CNI manager for ""
	I0414 14:29:26.386764 1213155 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0414 14:29:26.388208 1213155 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0414 14:29:26.389242 1213155 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0414 14:29:26.394753 1213155 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0414 14:29:26.394774 1213155 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0414 14:29:26.412210 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0414 14:29:26.820060 1213155 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 14:29:26.820136 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:26.820188 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-290859 minikube.k8s.io/updated_at=2025_04_14T14_29_26_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=ed8f1f01b35eff2786f40199152a1775806f2de2 minikube.k8s.io/name=ha-290859 minikube.k8s.io/primary=true
	I0414 14:29:27.135153 1213155 ops.go:34] apiserver oom_adj: -16
	I0414 14:29:27.135367 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:27.635449 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:28.135449 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:28.636235 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:29.136309 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:29.636026 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:29.742992 1213155 kubeadm.go:1113] duration metric: took 2.922923817s to wait for elevateKubeSystemPrivileges
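
	The repeated "kubectl get sa default" calls above (one roughly every 500ms) are a readiness poll: the step completes once the default service account has been created by the controller manager. A minimal sketch of that polling loop, assuming a hypothetical getSA closure in place of the remote kubectl call:

	    package main

	    import (
	        "errors"
	        "fmt"
	        "time"
	    )

	    // waitForDefaultSA sketches the polling visible above: re-run the check
	    // about every 500ms until the default service account exists or the
	    // timeout expires. getSA is a hypothetical stand-in for "kubectl get sa default".
	    func waitForDefaultSA(getSA func() error, timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            if err := getSA(); err == nil {
	                return nil // service account is present; the step can complete
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return errors.New("timed out waiting for default service account")
	    }

	    func main() {
	        calls := 0
	        err := waitForDefaultSA(func() error {
	            calls++
	            if calls < 3 {
	                return errors.New("not yet")
	            }
	            return nil
	        }, 5*time.Second)
	        fmt.Println("done:", err, "after", calls, "attempts")
	    }
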
	I0414 14:29:29.743045 1213155 kubeadm.go:394] duration metric: took 14.619926947s to StartCluster
	I0414 14:29:29.743074 1213155 settings.go:142] acquiring lock: {Name:mk41907a6d0da0bb56b7cd58b5d8065ec36ecc97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:29.743194 1213155 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:29:29.744197 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/kubeconfig: {Name:mkeb969af3beabfdafe344f27031959a97621135 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:29.744491 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0414 14:29:29.744502 1213155 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0414 14:29:29.744531 1213155 start.go:241] waiting for startup goroutines ...
	I0414 14:29:29.744555 1213155 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 14:29:29.744638 1213155 addons.go:69] Setting storage-provisioner=true in profile "ha-290859"
	I0414 14:29:29.744667 1213155 addons.go:238] Setting addon storage-provisioner=true in "ha-290859"
	I0414 14:29:29.744674 1213155 addons.go:69] Setting default-storageclass=true in profile "ha-290859"
	I0414 14:29:29.744699 1213155 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:29:29.744707 1213155 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-290859"
	I0414 14:29:29.744811 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:29.745181 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.745244 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.745183 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.745351 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.761398 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40887
	I0414 14:29:29.761447 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39907
	I0414 14:29:29.761914 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.762048 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.762457 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.762483 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.762590 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.762615 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.762878 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.762995 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.763052 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:29.763589 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.763641 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.765711 1213155 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:29:29.765898 1213155 kapi.go:59] client config for ha-290859: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt", KeyFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key", CAFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0414 14:29:29.766513 1213155 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0414 14:29:29.766536 1213155 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0414 14:29:29.766543 1213155 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0414 14:29:29.766547 1213155 cert_rotation.go:140] Starting client certificate rotation controller
	I0414 14:29:29.766549 1213155 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0414 14:29:29.766958 1213155 addons.go:238] Setting addon default-storageclass=true in "ha-290859"
	I0414 14:29:29.767009 1213155 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:29:29.767411 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.767464 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.779638 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46315
	I0414 14:29:29.780179 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.780847 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.780887 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.781279 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.781512 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:29.783372 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:29.783403 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36833
	I0414 14:29:29.783908 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.784349 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.784370 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.784677 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.785084 1213155 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 14:29:29.785313 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.785366 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.786178 1213155 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 14:29:29.786200 1213155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 14:29:29.786221 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:29.789923 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:29.790430 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:29.790464 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:29.790637 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:29.790795 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:29.790922 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:29.791099 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:29.802732 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37933
	I0414 14:29:29.803356 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.803862 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.803890 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.804276 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.804490 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:29.806170 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:29.806431 1213155 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 14:29:29.806453 1213155 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 14:29:29.806472 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:29.808998 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:29.809401 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:29.809433 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:29.809569 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:29.809729 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:29.809892 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:29.810022 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:29.896163 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0414 14:29:29.925192 1213155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 14:29:29.976032 1213155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 14:29:30.538988 1213155 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
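
	For reference, the sed pipeline above rewrites the CoreDNS Corefile in place: it adds a log directive after the errors plugin and, ahead of the "forward . /etc/resolv.conf" block, inserts a hosts stanza that resolves host.minikube.internal to the host gateway. The injected stanza, reconstructed from the sed expression, is:

	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }
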
	I0414 14:29:30.715801 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.715837 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.715837 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.715853 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.716172 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.716195 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.716206 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.716213 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.716280 1213155 main.go:141] libmachine: (ha-290859) DBG | Closing plugin on server side
	I0414 14:29:30.716311 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.716327 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.716336 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.716346 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.716567 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.716583 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.716597 1213155 main.go:141] libmachine: (ha-290859) DBG | Closing plugin on server side
	I0414 14:29:30.716566 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.716613 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.716759 1213155 round_trippers.go:470] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0414 14:29:30.716773 1213155 round_trippers.go:476] Request Headers:
	I0414 14:29:30.716785 1213155 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:29:30.716791 1213155 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:29:30.730413 1213155 round_trippers.go:581] Response Status: 200 OK in 13 milliseconds
	I0414 14:29:30.730637 1213155 round_trippers.go:470] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0414 14:29:30.730648 1213155 round_trippers.go:476] Request Headers:
	I0414 14:29:30.730655 1213155 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:29:30.730659 1213155 round_trippers.go:480]     Content-Type: application/vnd.kubernetes.protobuf
	I0414 14:29:30.730662 1213155 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:29:30.734349 1213155 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0414 14:29:30.734498 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.734513 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.734892 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.734913 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.734944 1213155 main.go:141] libmachine: (ha-290859) DBG | Closing plugin on server side
	I0414 14:29:30.736606 1213155 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0414 14:29:30.738276 1213155 addons.go:514] duration metric: took 993.723048ms for enable addons: enabled=[storage-provisioner default-storageclass]
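
	The two addon manifests above are applied concurrently (the storage-provisioner and storageclass kubectl apply calls start about 50ms apart at 14:29:29.925 and 14:29:29.976), which is why the "close driver server" calls interleave in the log. A minimal Go sketch of that fan-out/fan-in pattern, assuming a hypothetical apply closure in place of the remote kubectl apply:

	    package main

	    import (
	        "fmt"
	        "sync"
	    )

	    // applyAddons sketches the concurrency visible above: each manifest is
	    // applied in its own goroutine and results are collected once all finish.
	    // apply is a hypothetical stand-in for the remote "kubectl apply" call.
	    func applyAddons(apply func(manifest string) error, manifests []string) []error {
	        errs := make([]error, len(manifests))
	        var wg sync.WaitGroup
	        for i, m := range manifests {
	            wg.Add(1)
	            go func(i int, m string) {
	                defer wg.Done()
	                errs[i] = apply(m) // completion order is unordered, hence the interleaved log lines
	            }(i, m)
	        }
	        wg.Wait()
	        return errs
	    }

	    func main() {
	        errs := applyAddons(func(m string) error {
	            fmt.Println("apply:", m)
	            return nil
	        }, []string{
	            "/etc/kubernetes/addons/storage-provisioner.yaml",
	            "/etc/kubernetes/addons/storageclass.yaml",
	        })
	        fmt.Println(errs)
	    }
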
	I0414 14:29:30.738323 1213155 start.go:246] waiting for cluster config update ...
	I0414 14:29:30.738339 1213155 start.go:255] writing updated cluster config ...
	I0414 14:29:30.739993 1213155 out.go:201] 
	I0414 14:29:30.741235 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:30.741303 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:29:30.742718 1213155 out.go:177] * Starting "ha-290859-m02" control-plane node in "ha-290859" cluster
	I0414 14:29:30.743745 1213155 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:29:30.743770 1213155 cache.go:56] Caching tarball of preloaded images
	I0414 14:29:30.743876 1213155 preload.go:172] Found /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0414 14:29:30.743890 1213155 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0414 14:29:30.743970 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:29:30.744172 1213155 start.go:360] acquireMachinesLock for ha-290859-m02: {Name:mk496006d22a0565bb9e0d565e1b3cb0cf0971cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 14:29:30.744229 1213155 start.go:364] duration metric: took 28.185µs to acquireMachinesLock for "ha-290859-m02"
	I0414 14:29:30.744249 1213155 start.go:93] Provisioning new machine with config: &{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0414 14:29:30.744334 1213155 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0414 14:29:30.745838 1213155 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0414 14:29:30.745923 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:30.745962 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:30.761449 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46555
	I0414 14:29:30.761938 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:30.762474 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:30.762500 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:30.762925 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:30.763197 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:29:30.763401 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:30.763637 1213155 start.go:159] libmachine.API.Create for "ha-290859" (driver="kvm2")
	I0414 14:29:30.763675 1213155 client.go:168] LocalClient.Create starting
	I0414 14:29:30.763717 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem
	I0414 14:29:30.763761 1213155 main.go:141] libmachine: Decoding PEM data...
	I0414 14:29:30.763783 1213155 main.go:141] libmachine: Parsing certificate...
	I0414 14:29:30.763861 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem
	I0414 14:29:30.763890 1213155 main.go:141] libmachine: Decoding PEM data...
	I0414 14:29:30.763907 1213155 main.go:141] libmachine: Parsing certificate...
	I0414 14:29:30.763954 1213155 main.go:141] libmachine: Running pre-create checks...
	I0414 14:29:30.763968 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .PreCreateCheck
	I0414 14:29:30.764183 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
	I0414 14:29:30.764607 1213155 main.go:141] libmachine: Creating machine...
	I0414 14:29:30.764633 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .Create
	I0414 14:29:30.764796 1213155 main.go:141] libmachine: (ha-290859-m02) creating KVM machine...
	I0414 14:29:30.764820 1213155 main.go:141] libmachine: (ha-290859-m02) creating network...
	I0414 14:29:30.765949 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found existing default KVM network
	I0414 14:29:30.766029 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found existing private KVM network mk-ha-290859
	I0414 14:29:30.766196 1213155 main.go:141] libmachine: (ha-290859-m02) setting up store path in /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02 ...
	I0414 14:29:30.766222 1213155 main.go:141] libmachine: (ha-290859-m02) building disk image from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 14:29:30.766301 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:30.766189 1213531 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:29:30.766373 1213155 main.go:141] libmachine: (ha-290859-m02) Downloading /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 14:29:31.062543 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:31.062391 1213531 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa...
	I0414 14:29:31.719024 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:31.718890 1213531 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/ha-290859-m02.rawdisk...
	I0414 14:29:31.719061 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Writing magic tar header
	I0414 14:29:31.719076 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Writing SSH key tar header
	I0414 14:29:31.719086 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:31.719015 1213531 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02 ...
	I0414 14:29:31.719187 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02
	I0414 14:29:31.719213 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02 (perms=drwx------)
	I0414 14:29:31.719221 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines
	I0414 14:29:31.719232 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:29:31.719239 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines (perms=drwxr-xr-x)
	I0414 14:29:31.719270 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368
	I0414 14:29:31.719288 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube (perms=drwxr-xr-x)
	I0414 14:29:31.719298 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 14:29:31.719315 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins
	I0414 14:29:31.719326 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home
	I0414 14:29:31.719336 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | skipping /home - not owner
	I0414 14:29:31.719349 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368 (perms=drwxrwxr-x)
	I0414 14:29:31.719368 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 14:29:31.719380 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 14:29:31.719386 1213155 main.go:141] libmachine: (ha-290859-m02) creating domain...
	I0414 14:29:31.720303 1213155 main.go:141] libmachine: (ha-290859-m02) define libvirt domain using xml: 
	I0414 14:29:31.720321 1213155 main.go:141] libmachine: (ha-290859-m02) <domain type='kvm'>
	I0414 14:29:31.720330 1213155 main.go:141] libmachine: (ha-290859-m02)   <name>ha-290859-m02</name>
	I0414 14:29:31.720338 1213155 main.go:141] libmachine: (ha-290859-m02)   <memory unit='MiB'>2200</memory>
	I0414 14:29:31.720346 1213155 main.go:141] libmachine: (ha-290859-m02)   <vcpu>2</vcpu>
	I0414 14:29:31.720352 1213155 main.go:141] libmachine: (ha-290859-m02)   <features>
	I0414 14:29:31.720359 1213155 main.go:141] libmachine: (ha-290859-m02)     <acpi/>
	I0414 14:29:31.720364 1213155 main.go:141] libmachine: (ha-290859-m02)     <apic/>
	I0414 14:29:31.720371 1213155 main.go:141] libmachine: (ha-290859-m02)     <pae/>
	I0414 14:29:31.720381 1213155 main.go:141] libmachine: (ha-290859-m02)     
	I0414 14:29:31.720411 1213155 main.go:141] libmachine: (ha-290859-m02)   </features>
	I0414 14:29:31.720433 1213155 main.go:141] libmachine: (ha-290859-m02)   <cpu mode='host-passthrough'>
	I0414 14:29:31.720452 1213155 main.go:141] libmachine: (ha-290859-m02)   
	I0414 14:29:31.720461 1213155 main.go:141] libmachine: (ha-290859-m02)   </cpu>
	I0414 14:29:31.720488 1213155 main.go:141] libmachine: (ha-290859-m02)   <os>
	I0414 14:29:31.720507 1213155 main.go:141] libmachine: (ha-290859-m02)     <type>hvm</type>
	I0414 14:29:31.720537 1213155 main.go:141] libmachine: (ha-290859-m02)     <boot dev='cdrom'/>
	I0414 14:29:31.720559 1213155 main.go:141] libmachine: (ha-290859-m02)     <boot dev='hd'/>
	I0414 14:29:31.720572 1213155 main.go:141] libmachine: (ha-290859-m02)     <bootmenu enable='no'/>
	I0414 14:29:31.720587 1213155 main.go:141] libmachine: (ha-290859-m02)   </os>
	I0414 14:29:31.720597 1213155 main.go:141] libmachine: (ha-290859-m02)   <devices>
	I0414 14:29:31.720609 1213155 main.go:141] libmachine: (ha-290859-m02)     <disk type='file' device='cdrom'>
	I0414 14:29:31.720626 1213155 main.go:141] libmachine: (ha-290859-m02)       <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/boot2docker.iso'/>
	I0414 14:29:31.720637 1213155 main.go:141] libmachine: (ha-290859-m02)       <target dev='hdc' bus='scsi'/>
	I0414 14:29:31.720649 1213155 main.go:141] libmachine: (ha-290859-m02)       <readonly/>
	I0414 14:29:31.720659 1213155 main.go:141] libmachine: (ha-290859-m02)     </disk>
	I0414 14:29:31.720668 1213155 main.go:141] libmachine: (ha-290859-m02)     <disk type='file' device='disk'>
	I0414 14:29:31.720684 1213155 main.go:141] libmachine: (ha-290859-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 14:29:31.720699 1213155 main.go:141] libmachine: (ha-290859-m02)       <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/ha-290859-m02.rawdisk'/>
	I0414 14:29:31.720732 1213155 main.go:141] libmachine: (ha-290859-m02)       <target dev='hda' bus='virtio'/>
	I0414 14:29:31.720746 1213155 main.go:141] libmachine: (ha-290859-m02)     </disk>
	I0414 14:29:31.720756 1213155 main.go:141] libmachine: (ha-290859-m02)     <interface type='network'>
	I0414 14:29:31.720768 1213155 main.go:141] libmachine: (ha-290859-m02)       <source network='mk-ha-290859'/>
	I0414 14:29:31.720777 1213155 main.go:141] libmachine: (ha-290859-m02)       <model type='virtio'/>
	I0414 14:29:31.720788 1213155 main.go:141] libmachine: (ha-290859-m02)     </interface>
	I0414 14:29:31.720799 1213155 main.go:141] libmachine: (ha-290859-m02)     <interface type='network'>
	I0414 14:29:31.720809 1213155 main.go:141] libmachine: (ha-290859-m02)       <source network='default'/>
	I0414 14:29:31.720821 1213155 main.go:141] libmachine: (ha-290859-m02)       <model type='virtio'/>
	I0414 14:29:31.720835 1213155 main.go:141] libmachine: (ha-290859-m02)     </interface>
	I0414 14:29:31.720844 1213155 main.go:141] libmachine: (ha-290859-m02)     <serial type='pty'>
	I0414 14:29:31.720855 1213155 main.go:141] libmachine: (ha-290859-m02)       <target port='0'/>
	I0414 14:29:31.720865 1213155 main.go:141] libmachine: (ha-290859-m02)     </serial>
	I0414 14:29:31.720875 1213155 main.go:141] libmachine: (ha-290859-m02)     <console type='pty'>
	I0414 14:29:31.720886 1213155 main.go:141] libmachine: (ha-290859-m02)       <target type='serial' port='0'/>
	I0414 14:29:31.720896 1213155 main.go:141] libmachine: (ha-290859-m02)     </console>
	I0414 14:29:31.720909 1213155 main.go:141] libmachine: (ha-290859-m02)     <rng model='virtio'>
	I0414 14:29:31.720943 1213155 main.go:141] libmachine: (ha-290859-m02)       <backend model='random'>/dev/random</backend>
	I0414 14:29:31.720956 1213155 main.go:141] libmachine: (ha-290859-m02)     </rng>
	I0414 14:29:31.720962 1213155 main.go:141] libmachine: (ha-290859-m02)     
	I0414 14:29:31.720972 1213155 main.go:141] libmachine: (ha-290859-m02)     
	I0414 14:29:31.720978 1213155 main.go:141] libmachine: (ha-290859-m02)   </devices>
	I0414 14:29:31.720993 1213155 main.go:141] libmachine: (ha-290859-m02) </domain>
	I0414 14:29:31.721002 1213155 main.go:141] libmachine: (ha-290859-m02) 
	I0414 14:29:31.727524 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:76:01:7d in network default
	I0414 14:29:31.728172 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:31.728187 1213155 main.go:141] libmachine: (ha-290859-m02) starting domain...
	I0414 14:29:31.728195 1213155 main.go:141] libmachine: (ha-290859-m02) ensuring networks are active...
	I0414 14:29:31.728896 1213155 main.go:141] libmachine: (ha-290859-m02) Ensuring network default is active
	I0414 14:29:31.729170 1213155 main.go:141] libmachine: (ha-290859-m02) Ensuring network mk-ha-290859 is active
	I0414 14:29:31.729521 1213155 main.go:141] libmachine: (ha-290859-m02) getting domain XML...
	I0414 14:29:31.730489 1213155 main.go:141] libmachine: (ha-290859-m02) creating domain...
	I0414 14:29:32.993969 1213155 main.go:141] libmachine: (ha-290859-m02) waiting for IP...
	I0414 14:29:32.996009 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:32.996441 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:32.996505 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:32.996448 1213531 retry.go:31] will retry after 202.522594ms: waiting for domain to come up
	I0414 14:29:33.201175 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:33.201705 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:33.201751 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:33.201682 1213531 retry.go:31] will retry after 346.96007ms: waiting for domain to come up
	I0414 14:29:33.550485 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:33.550900 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:33.550931 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:33.550863 1213531 retry.go:31] will retry after 407.207189ms: waiting for domain to come up
	I0414 14:29:33.959550 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:33.960116 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:33.960149 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:33.960094 1213531 retry.go:31] will retry after 434.401549ms: waiting for domain to come up
	I0414 14:29:34.395749 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:34.396217 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:34.396267 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:34.396208 1213531 retry.go:31] will retry after 552.547121ms: waiting for domain to come up
	I0414 14:29:34.949860 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:34.950310 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:34.950344 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:34.950269 1213531 retry.go:31] will retry after 848.939274ms: waiting for domain to come up
	I0414 14:29:35.800706 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:35.801275 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:35.801301 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:35.801229 1213531 retry.go:31] will retry after 1.078619357s: waiting for domain to come up
	I0414 14:29:36.881700 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:36.882163 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:36.882187 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:36.882128 1213531 retry.go:31] will retry after 1.079210669s: waiting for domain to come up
	I0414 14:29:37.963455 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:37.963935 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:37.963969 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:37.963899 1213531 retry.go:31] will retry after 1.194058186s: waiting for domain to come up
	I0414 14:29:39.160481 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:39.160993 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:39.161031 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:39.160949 1213531 retry.go:31] will retry after 1.513626688s: waiting for domain to come up
	I0414 14:29:40.676551 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:40.677038 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:40.677071 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:40.677004 1213531 retry.go:31] will retry after 1.924347004s: waiting for domain to come up
	I0414 14:29:42.603644 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:42.604168 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:42.604192 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:42.604145 1213531 retry.go:31] will retry after 2.797639018s: waiting for domain to come up
	I0414 14:29:45.405004 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:45.405658 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:45.405688 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:45.405627 1213531 retry.go:31] will retry after 2.864814671s: waiting for domain to come up
	I0414 14:29:48.274060 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:48.274518 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:48.274591 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:48.274508 1213531 retry.go:31] will retry after 4.611052523s: waiting for domain to come up
	I0414 14:29:52.886693 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:52.887068 1213155 main.go:141] libmachine: (ha-290859-m02) found domain IP: 192.168.39.111
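
	The wait-for-IP loop above retries with growing, jittered delays (202ms up to roughly 4.6s, logged from retry.go:31) until libvirt reports a DHCP lease for the new domain. A minimal sketch of that capped backoff, assuming a hypothetical lookupIP closure in place of the lease query; the delay schedule here is illustrative, not minikube's exact one:

	    package main

	    import (
	        "errors"
	        "fmt"
	        "math/rand"
	        "time"
	    )

	    // waitForIP probes for the domain's IP with roughly exponential, jittered
	    // delays until the lease appears or the time budget is exhausted.
	    func waitForIP(lookupIP func() (string, error), budget time.Duration) (string, error) {
	        delay := 200 * time.Millisecond
	        deadline := time.Now().Add(budget)
	        for time.Now().Before(deadline) {
	            if ip, err := lookupIP(); err == nil {
	                return ip, nil
	            }
	            // add jitter so repeated probes do not synchronize
	            time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2))))
	            if delay < 5*time.Second {
	                delay *= 2 // back off, capped to keep a reasonable probe cadence
	            }
	        }
	        return "", errors.New("timed out waiting for domain IP")
	    }

	    func main() {
	        tries := 0
	        ip, err := waitForIP(func() (string, error) {
	            tries++
	            if tries < 4 {
	                return "", errors.New("no lease yet")
	            }
	            return "192.168.39.111", nil
	        }, time.Minute)
	        fmt.Println(ip, err, "after", tries, "tries")
	    }
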
	I0414 14:29:52.887093 1213155 main.go:141] libmachine: (ha-290859-m02) reserving static IP address...
	I0414 14:29:52.887105 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has current primary IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:52.887506 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find host DHCP lease matching {name: "ha-290859-m02", mac: "52:54:00:f0:fd:94", ip: "192.168.39.111"} in network mk-ha-290859
	I0414 14:29:52.966052 1213155 main.go:141] libmachine: (ha-290859-m02) reserved static IP address 192.168.39.111 for domain ha-290859-m02
	I0414 14:29:52.966083 1213155 main.go:141] libmachine: (ha-290859-m02) waiting for SSH...
	I0414 14:29:52.966091 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Getting to WaitForSSH function...
	I0414 14:29:52.968665 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:52.969034 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:52.969082 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:52.969208 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Using SSH client type: external
	I0414 14:29:52.969231 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa (-rw-------)
	I0414 14:29:52.969263 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:29:52.969282 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | About to run SSH command:
	I0414 14:29:52.969295 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | exit 0
	I0414 14:29:53.095336 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | SSH cmd err, output: <nil>: 
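The DBG lines above log the external SSH probe as a bare argument vector. With the quoting restored, and the options moved ahead of the destination (plain OpenSSH stops option parsing at the host), it is a single runnable command that succeeds once sshd in the guest is up:

    ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 \
        -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
        -o PasswordAuthentication=no -o ServerAliveInterval=60 \
        -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -o IdentitiesOnly=yes -p 22 \
        -i /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa \
        docker@192.168.39.111 'exit 0'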
	I0414 14:29:53.095545 1213155 main.go:141] libmachine: (ha-290859-m02) KVM machine creation complete
	I0414 14:29:53.095910 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
	I0414 14:29:53.096462 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:53.096622 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:53.096806 1213155 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 14:29:53.096820 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetState
	I0414 14:29:53.098070 1213155 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 14:29:53.098085 1213155 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 14:29:53.098090 1213155 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 14:29:53.098095 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.100244 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.100649 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.100680 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.100852 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.101066 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.101236 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.101372 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.101519 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:53.101769 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:53.101782 1213155 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 14:29:53.206593 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:29:53.206617 1213155 main.go:141] libmachine: Detecting the provisioner...
	I0414 14:29:53.206628 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.209588 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.209969 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.209988 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.210187 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.210382 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.210544 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.210717 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.210971 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:53.211192 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:53.211205 1213155 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 14:29:53.315888 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 14:29:53.315980 1213155 main.go:141] libmachine: found compatible host: buildroot
	I0414 14:29:53.315990 1213155 main.go:141] libmachine: Provisioning with buildroot...
	I0414 14:29:53.316001 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:29:53.316277 1213155 buildroot.go:166] provisioning hostname "ha-290859-m02"
	I0414 14:29:53.316306 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:29:53.316451 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.319393 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.319803 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.319837 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.319946 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.320140 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.320321 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.320450 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.320602 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:53.320806 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:53.320818 1213155 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-290859-m02 && echo "ha-290859-m02" | sudo tee /etc/hostname
	I0414 14:29:53.442594 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-290859-m02
	
	I0414 14:29:53.442629 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.445561 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.445918 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.445944 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.446150 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.446351 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.446528 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.446678 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.446833 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:53.447038 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:53.447053 1213155 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-290859-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-290859-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-290859-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:29:53.559946 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
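The hostname provisioning above is two SSH commands; the same logic with comments added (behaviour unchanged):

    # Set the transient hostname and persist it.
    sudo hostname ha-290859-m02 && echo "ha-290859-m02" | sudo tee /etc/hostname

    # Make the name resolvable locally: rewrite an existing 127.0.1.1 entry,
    # or append one if /etc/hosts has none yet.
    if ! grep -xq '.*\sha-290859-m02' /etc/hosts; then
        if grep -xq '127.0.1.1\s.*' /etc/hosts; then
            sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-290859-m02/g' /etc/hosts
        else
            echo '127.0.1.1 ha-290859-m02' | sudo tee -a /etc/hosts
        fi
    fi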
	I0414 14:29:53.559988 1213155 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1196368/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1196368/.minikube}
	I0414 14:29:53.560014 1213155 buildroot.go:174] setting up certificates
	I0414 14:29:53.560031 1213155 provision.go:84] configureAuth start
	I0414 14:29:53.560046 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:29:53.560377 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:29:53.562822 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.563207 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.563237 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.563574 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.566107 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.566478 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.566505 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.566628 1213155 provision.go:143] copyHostCerts
	I0414 14:29:53.566676 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:29:53.566716 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem, removing ...
	I0414 14:29:53.566730 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:29:53.566839 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem (1082 bytes)
	I0414 14:29:53.566954 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:29:53.566979 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem, removing ...
	I0414 14:29:53.566987 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:29:53.567026 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem (1123 bytes)
	I0414 14:29:53.567106 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:29:53.567130 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem, removing ...
	I0414 14:29:53.567137 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:29:53.567173 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem (1675 bytes)
	I0414 14:29:53.567293 1213155 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem org=jenkins.ha-290859-m02 san=[127.0.0.1 192.168.39.111 ha-290859-m02 localhost minikube]
	I0414 14:29:53.976110 1213155 provision.go:177] copyRemoteCerts
	I0414 14:29:53.976184 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:29:53.976219 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.978798 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.979170 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.979202 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.979355 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.979571 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.979771 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.979950 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:29:54.060926 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0414 14:29:54.061020 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0414 14:29:54.083723 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0414 14:29:54.083818 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 14:29:54.106702 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0414 14:29:54.106773 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 14:29:54.128136 1213155 provision.go:87] duration metric: took 568.088664ms to configureAuth
	I0414 14:29:54.128177 1213155 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:29:54.128372 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:54.128400 1213155 main.go:141] libmachine: Checking connection to Docker...
	I0414 14:29:54.128413 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetURL
	I0414 14:29:54.129571 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | using libvirt version 6000000
	I0414 14:29:54.131690 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.132071 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.132095 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.132296 1213155 main.go:141] libmachine: Docker is up and running!
	I0414 14:29:54.132311 1213155 main.go:141] libmachine: Reticulating splines...
	I0414 14:29:54.132318 1213155 client.go:171] duration metric: took 23.368636066s to LocalClient.Create
	I0414 14:29:54.132344 1213155 start.go:167] duration metric: took 23.368708618s to libmachine.API.Create "ha-290859"
	I0414 14:29:54.132356 1213155 start.go:293] postStartSetup for "ha-290859-m02" (driver="kvm2")
	I0414 14:29:54.132370 1213155 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:29:54.132394 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.132652 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:29:54.132681 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:54.134726 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.135119 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.135146 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.135312 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:54.135512 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.135648 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:54.135782 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:29:54.217134 1213155 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:29:54.221237 1213155 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:29:54.221265 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/addons for local assets ...
	I0414 14:29:54.221324 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/files for local assets ...
	I0414 14:29:54.221392 1213155 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> 12036392.pem in /etc/ssl/certs
	I0414 14:29:54.221401 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /etc/ssl/certs/12036392.pem
	I0414 14:29:54.221495 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:29:54.230111 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:29:54.253934 1213155 start.go:296] duration metric: took 121.560617ms for postStartSetup
	I0414 14:29:54.253995 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
	I0414 14:29:54.254683 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:29:54.257374 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.257778 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.257811 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.258118 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:29:54.258332 1213155 start.go:128] duration metric: took 23.513984018s to createHost
	I0414 14:29:54.258362 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:54.260873 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.261257 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.261285 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.261448 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:54.261638 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.261821 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.261984 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:54.262185 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:54.262369 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:54.262379 1213155 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:29:54.367727 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744640994.343893226
	
	I0414 14:29:54.367759 1213155 fix.go:216] guest clock: 1744640994.343893226
	I0414 14:29:54.367766 1213155 fix.go:229] Guest: 2025-04-14 14:29:54.343893226 +0000 UTC Remote: 2025-04-14 14:29:54.258346943 +0000 UTC m=+69.442509295 (delta=85.546283ms)
	I0414 14:29:54.367782 1213155 fix.go:200] guest clock delta is within tolerance: 85.546283ms
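The clock check above runs date +%s.%N in the guest and compares the result against the host's clock at the moment the command returned; a delta inside tolerance means no time fix-up is needed on the new node. The same comparison by hand (illustrative; GUEST_SSH is a placeholder for the full ssh invocation shown earlier):

    host_now=$(date +%s.%N)
    guest_now=$($GUEST_SSH date +%s.%N)
    echo "delta: $(echo "$host_now - $guest_now" | bc) s"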
	I0414 14:29:54.367788 1213155 start.go:83] releasing machines lock for "ha-290859-m02", held for 23.623550564s
	I0414 14:29:54.367807 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.368115 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:29:54.370975 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.371432 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.371462 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.373758 1213155 out.go:177] * Found network options:
	I0414 14:29:54.375127 1213155 out.go:177]   - NO_PROXY=192.168.39.110
	W0414 14:29:54.376278 1213155 proxy.go:119] fail to check proxy env: Error ip not in block
	I0414 14:29:54.376312 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.376913 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.377127 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.377268 1213155 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:29:54.377316 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	W0414 14:29:54.377370 1213155 proxy.go:119] fail to check proxy env: Error ip not in block
	I0414 14:29:54.377457 1213155 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0414 14:29:54.377481 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:54.380102 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.380374 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.380406 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.380429 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.380578 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:54.380741 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.380859 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.380897 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.380909 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:54.381045 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:29:54.381125 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:54.381305 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.381467 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:54.381614 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	W0414 14:29:54.458225 1213155 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:29:54.458308 1213155 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:29:54.490449 1213155 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
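The find invocation above is logged with its shell escaping stripped (the parentheses and the trailing ";" are escaped on the real command line). A runnable equivalent, which renames any bridge/podman CNI configs out of containerd's way:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
        \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
        -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;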
	I0414 14:29:54.490475 1213155 start.go:495] detecting cgroup driver to use...
	I0414 14:29:54.490555 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0414 14:29:54.524660 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0414 14:29:54.537871 1213155 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:29:54.537936 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:29:54.549801 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:29:54.562203 1213155 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:29:54.666348 1213155 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:29:54.786710 1213155 docker.go:233] disabling docker service ...
	I0414 14:29:54.786789 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:29:54.800092 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:29:54.812105 1213155 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:29:54.936777 1213155 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:29:55.059002 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 14:29:55.072980 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:29:55.089970 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0414 14:29:55.099362 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0414 14:29:55.108681 1213155 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0414 14:29:55.108766 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0414 14:29:55.118203 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:29:55.127402 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0414 14:29:55.136483 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:29:55.145554 1213155 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:29:55.154769 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0414 14:29:55.163700 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0414 14:29:55.172612 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
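The sed series above edits /etc/containerd/config.toml in place rather than regenerating it. A quick way to confirm the settings it targets (expected values per the log lines above; the exact line layout depends on the image's stock config):

    sudo grep -nE 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' \
        /etc/containerd/config.toml
    # expected:
    #   SystemdCgroup = false                          <- "cgroupfs" cgroup driver
    #   sandbox_image = "registry.k8s.io/pause:3.10"
    #   conf_dir = "/etc/cni/net.d"
    #   enable_unprivileged_ports = true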
	I0414 14:29:55.181597 1213155 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:29:55.189962 1213155 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:29:55.190019 1213155 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:29:55.202112 1213155 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
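The sysctl probe failed only because br_netfilter was not loaded yet; loading the module creates the /proc/sys/net/bridge entries, after which the two kernel settings Kubernetes networking relies on can be read back directly:

    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables    # 1 by default once the module is in
    cat /proc/sys/net/ipv4/ip_forward            # 1 after the echo above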
	I0414 14:29:55.210883 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:29:55.319480 1213155 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0414 14:29:55.344914 1213155 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0414 14:29:55.345008 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:29:55.349081 1213155 retry.go:31] will retry after 1.00520308s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0414 14:29:56.354657 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:29:56.359600 1213155 start.go:563] Will wait 60s for crictl version
	I0414 14:29:56.359685 1213155 ssh_runner.go:195] Run: which crictl
	I0414 14:29:56.363336 1213155 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:29:56.403201 1213155 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0414 14:29:56.403312 1213155 ssh_runner.go:195] Run: containerd --version
	I0414 14:29:56.430179 1213155 ssh_runner.go:195] Run: containerd --version
	I0414 14:29:56.454598 1213155 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.23 ...
	I0414 14:29:56.455785 1213155 out.go:177]   - env NO_PROXY=192.168.39.110
	I0414 14:29:56.456735 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:29:56.459280 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:56.459661 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:56.459691 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:56.459901 1213155 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 14:29:56.463673 1213155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
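The one-liner above is an idempotent /etc/hosts edit: drop any stale host.minikube.internal entry, append the fresh mapping to the host gateway, and copy the temp file back with sudo (the redirection itself runs unprivileged, hence the temp file). Spelled out:

    {
        grep -v $'\thost.minikube.internal$' /etc/hosts   # keep everything but the old entry
        echo $'192.168.39.1\thost.minikube.internal'      # gateway IP from the log
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts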
	I0414 14:29:56.475057 1213155 mustload.go:65] Loading cluster: ha-290859
	I0414 14:29:56.475248 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:56.475557 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:56.475600 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:56.490597 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45247
	I0414 14:29:56.491136 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:56.491690 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:56.491711 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:56.492119 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:56.492309 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:56.493794 1213155 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:29:56.494134 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:56.494173 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:56.509360 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38381
	I0414 14:29:56.509774 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:56.510229 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:56.510256 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:56.510618 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:56.510840 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:56.511031 1213155 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859 for IP: 192.168.39.111
	I0414 14:29:56.511044 1213155 certs.go:194] generating shared ca certs ...
	I0414 14:29:56.511057 1213155 certs.go:226] acquiring lock for ca certs: {Name:mk7215406b4c41badf9eca6bf9f1036fd88f670e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:56.511177 1213155 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key
	I0414 14:29:56.511226 1213155 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key
	I0414 14:29:56.511236 1213155 certs.go:256] generating profile certs ...
	I0414 14:29:56.511347 1213155 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key
	I0414 14:29:56.511373 1213155 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e
	I0414 14:29:56.511386 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110 192.168.39.111 192.168.39.254]
	I0414 14:29:56.589532 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e ...
	I0414 14:29:56.589564 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e: {Name:mk9fb7b2adad4a62e9ebf1f50826b8647aaaa2d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:56.589727 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e ...
	I0414 14:29:56.589740 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e: {Name:mk7ad07038879568d4a23c2fb5c04f12405eb02f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:56.589811 1213155 certs.go:381] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt
	I0414 14:29:56.589948 1213155 certs.go:385] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key
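The apiserver certificate is regenerated here because its SAN list must now cover the second control-plane IP (192.168.39.111) and the HA virtual IP (192.168.39.254). minikube builds the cert in Go; an openssl rendering of the same IP SANs, for illustration only (file names hypothetical, and the DNS SANs minikube also includes are omitted):

    openssl req -new -key apiserver.key -subj "/CN=minikube" -out apiserver.csr
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
        -days 365 -out apiserver.crt -extfile <(printf '%s' \
        'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.39.110,IP:192.168.39.111,IP:192.168.39.254')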
	I0414 14:29:56.590096 1213155 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key
	I0414 14:29:56.590118 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0414 14:29:56.590137 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0414 14:29:56.590151 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0414 14:29:56.590162 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0414 14:29:56.590180 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0414 14:29:56.590198 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0414 14:29:56.590211 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0414 14:29:56.590220 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0414 14:29:56.590271 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem (1338 bytes)
	W0414 14:29:56.590298 1213155 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639_empty.pem, impossibly tiny 0 bytes
	I0414 14:29:56.590308 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 14:29:56.590327 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem (1082 bytes)
	I0414 14:29:56.590346 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:29:56.590368 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem (1675 bytes)
	I0414 14:29:56.590404 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:29:56.590430 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:56.590446 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem -> /usr/share/ca-certificates/1203639.pem
	I0414 14:29:56.590457 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /usr/share/ca-certificates/12036392.pem
	I0414 14:29:56.590494 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:56.593379 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:56.593755 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:56.593777 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:56.593996 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:56.594232 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:56.594405 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:56.594540 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:56.671687 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0414 14:29:56.677338 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0414 14:29:56.689003 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0414 14:29:56.693487 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0414 14:29:56.704430 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0414 14:29:56.708650 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0414 14:29:56.719039 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0414 14:29:56.723166 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0414 14:29:56.734152 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0414 14:29:56.738243 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0414 14:29:56.749081 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0414 14:29:56.753248 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0414 14:29:56.764073 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:29:56.788198 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:29:56.813073 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:29:56.835958 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:29:56.859645 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0414 14:29:56.882879 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 14:29:56.906187 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:29:56.928932 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 14:29:56.952365 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:29:56.974920 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem --> /usr/share/ca-certificates/1203639.pem (1338 bytes)
	I0414 14:29:56.998466 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /usr/share/ca-certificates/12036392.pem (1708 bytes)
	I0414 14:29:57.022704 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0414 14:29:57.038828 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0414 14:29:57.054237 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0414 14:29:57.069513 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0414 14:29:57.085532 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0414 14:29:57.101522 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0414 14:29:57.117372 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0414 14:29:57.132827 1213155 ssh_runner.go:195] Run: openssl version
	I0414 14:29:57.138331 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:29:57.148324 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:57.152469 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:57.152557 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:57.158279 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 14:29:57.169126 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1203639.pem && ln -fs /usr/share/ca-certificates/1203639.pem /etc/ssl/certs/1203639.pem"
	I0414 14:29:57.179995 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1203639.pem
	I0414 14:29:57.184265 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1203639.pem
	I0414 14:29:57.184340 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1203639.pem
	I0414 14:29:57.189810 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1203639.pem /etc/ssl/certs/51391683.0"
	I0414 14:29:57.199987 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12036392.pem && ln -fs /usr/share/ca-certificates/12036392.pem /etc/ssl/certs/12036392.pem"
	I0414 14:29:57.210177 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12036392.pem
	I0414 14:29:57.214740 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/12036392.pem
	I0414 14:29:57.214815 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12036392.pem
	I0414 14:29:57.221853 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12036392.pem /etc/ssl/certs/3ec20f2e.0"
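The hash-named links above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's CA directory convention: the link name is the certificate's subject hash plus a .0 suffix, which is exactly what the preceding openssl x509 -hash calls compute. For one cert:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # h=b5213941 in this run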
	I0414 14:29:57.232248 1213155 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:29:57.236270 1213155 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 14:29:57.236327 1213155 kubeadm.go:934] updating node {m02 192.168.39.111 8443 v1.32.2 containerd true true} ...
	I0414 14:29:57.236439 1213155 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-290859-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 14:29:57.236473 1213155 kube-vip.go:115] generating kube-vip config ...
	I0414 14:29:57.236525 1213155 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0414 14:29:57.252239 1213155 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0414 14:29:57.252336 1213155 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.10
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
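The generated manifest is a static Pod spec (kind: Pod, hostNetwork: true) that runs kube-vip to hold the 192.168.39.254 virtual IP for the control plane. Saved locally, it can be sanity-checked without a cluster (assumes kubectl on the workstation and the YAML saved as kube-vip.yaml):

    kubectl create --dry-run=client -f kube-vip.yaml -o name   # prints pod/kube-vip if the spec parses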
	I0414 14:29:57.252412 1213155 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 14:29:57.262218 1213155 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.2': No such file or directory
	
	Initiating transfer...
	I0414 14:29:57.262295 1213155 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.2
	I0414 14:29:57.271580 1213155 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
	I0414 14:29:57.271599 1213155 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubeadm
	I0414 14:29:57.271617 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubectl -> /var/lib/minikube/binaries/v1.32.2/kubectl
	I0414 14:29:57.271622 1213155 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubelet
	I0414 14:29:57.271681 1213155 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl
	I0414 14:29:57.275804 1213155 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubectl': No such file or directory
	I0414 14:29:57.275835 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubectl --> /var/lib/minikube/binaries/v1.32.2/kubectl (57323672 bytes)
	I0414 14:29:58.408400 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:29:58.423781 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubelet -> /var/lib/minikube/binaries/v1.32.2/kubelet
	I0414 14:29:58.423898 1213155 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet
	I0414 14:29:58.428378 1213155 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubelet': No such file or directory
	I0414 14:29:58.428415 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubelet --> /var/lib/minikube/binaries/v1.32.2/kubelet (77406468 bytes)
	I0414 14:29:58.749359 1213155 out.go:201] 
	W0414 14:29:58.750775 1213155 out.go:270] X Exiting due to GUEST_START: failed to start node: adding node: update node: downloading binaries: downloading kubeadm: download failed: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256 Dst:/home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubeadm.download Pwd: Mode:2 Umask:---------- Detectors:[0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0] Decompressors:map[bz2:0xc0004c8690 gz:0xc0004c8698 tar:0xc0004c8610 tar.bz2:0xc0004c8620 tar.gz:0xc0004c8630 tar.xz:0xc0004c8650 tar.zst:0xc0004c8660 tbz2:0xc0004c8620 tgz:0xc0004c8630 txz:0xc0004c8650 tzst:0xc0004c8660 xz:0xc0004c8700 zip:0xc0004c8720 zst:0xc0004c8708] Getters:map[file:0xc00216a250 http:0xc00012c550 https:0xc00012c5a0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: read tcp 10.154.0.3:60586->151.101.193.55:443: read: connection reset by peer
	W0414 14:29:58.750801 1213155 out.go:270] * 
	W0414 14:29:58.751639 1213155 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 14:29:58.753070 1213155 out.go:201] 
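The fatal GUEST_START error above is a transient network failure ("connection reset by peer" from 151.101.193.55, the dl.k8s.io CDN) while fetching kubeadm for the second control-plane node. The &{...} dump shows minikube delegates the fetch to hashicorp/go-getter, and the ?checksum=file:...sha256 query tells go-getter to verify the binary against the published SHA-256 file before moving it into the cache. A minimal sketch of that pattern, assuming go-getter v1; the retry loop illustrates how a one-off reset could be absorbed and is not minikube's actual code:

	package main

	import (
		"fmt"
		"log"

		"github.com/hashicorp/go-getter"
	)

	func main() {
		base := "https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm"
		// checksum=file:<url> makes go-getter download the .sha256 file and
		// verify the artifact against it before renaming it into place.
		src := fmt.Sprintf("%s?checksum=file:%s.sha256", base, base)

		client := &getter.Client{
			Src:  src,
			Dst:  "kubeadm.download",
			Mode: getter.ClientModeFile,
		}
		var err error
		for attempt := 1; attempt <= 3; attempt++ { // illustrative retries
			if err = client.Get(); err == nil {
				return
			}
			log.Printf("attempt %d failed: %v", attempt, err)
		}
		log.Fatal(err)
	}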
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	24e6d7cfe7ea4       8c811b4aec35f       11 minutes ago      Running             busybox                   0                   78438e8022143       busybox-58667487b6-t6bgg
	731a9f2fe8645       c69fa2e9cbf5f       12 minutes ago      Running             coredns                   0                   e56d2e4c87eea       coredns-668d6bf9bc-qnl6q
	0ec0a3a234c7c       c69fa2e9cbf5f       12 minutes ago      Running             coredns                   0                   2818c413e6e32       coredns-668d6bf9bc-wbn4p
	922f97d06563e       6e38f40d628db       12 minutes ago      Running             storage-provisioner       0                   4de376d34ee7f       storage-provisioner
	2df8ccb8d6ed9       df3849d954c98       12 minutes ago      Running             kindnet-cni               0                   08244cfc780bd       kindnet-hm99t
	e22a81661302f       f1332858868e1       12 minutes ago      Running             kube-proxy                0                   f20a0bcfbd507       kube-proxy-cg945
	9914f8879fc43       6ff023a402a69       12 minutes ago      Running             kube-vip                  0                   7b4e857fc4a72       kube-vip-ha-290859
	8263b35014337       b6a454c5a800d       12 minutes ago      Running             kube-controller-manager   0                   96ffccfabb2f0       kube-controller-manager-ha-290859
	3607093f95b04       85b7a174738ba       12 minutes ago      Running             kube-apiserver            0                   7d06c53c8318a       kube-apiserver-ha-290859
	b9d0c94204534       a9e7e6b294baf       12 minutes ago      Running             etcd                      0                   07c98c2ded11c       etcd-ha-290859
	341626ffff967       d8e673e7c9983       12 minutes ago      Running             kube-scheduler            0                   d86edf81d4f34       kube-scheduler-ha-290859
	
	
	==> containerd <==
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.168944603Z" level=info msg="StartContainer for \"0ec0a3a234c7c9ab89ca83a237362a229e9c5f0e94fdbf641b886cf994e1cd2f\""
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.181036869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qnl6q,Uid:a590080d-c4b1-4697-9849-ae6130e483a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"e56d2e4c87eea2d527e5c301e33c596e4ec4533b17e49248e3c35eeb66f90f11\""
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.186359489Z" level=info msg="CreateContainer within sandbox \"e56d2e4c87eea2d527e5c301e33c596e4ec4533b17e49248e3c35eeb66f90f11\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.209760426Z" level=info msg="CreateContainer within sandbox \"e56d2e4c87eea2d527e5c301e33c596e4ec4533b17e49248e3c35eeb66f90f11\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0\""
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.212826022Z" level=info msg="StartContainer for \"922f97d06563e10c12ce83edd45e4f1aa0b78449dcdb50b413a7f4fc80cc346b\" returns successfully"
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.215681811Z" level=info msg="StartContainer for \"731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0\""
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.285830032Z" level=info msg="StartContainer for \"0ec0a3a234c7c9ab89ca83a237362a229e9c5f0e94fdbf641b886cf994e1cd2f\" returns successfully"
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.294639585Z" level=info msg="StartContainer for \"731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0\" returns successfully"
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.131928214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-58667487b6-t6bgg,Uid:bd39f57c-bcb5-4d77-b171-6d4d2f237e54,Namespace:default,Attempt:0,}"
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.218617705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.218691310Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.218706805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.218958691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.281907696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-58667487b6-t6bgg,Uid:bd39f57c-bcb5-4d77-b171-6d4d2f237e54,Namespace:default,Attempt:0,} returns sandbox id \"78438e8022143055bed5e2d8a26db130ead88208a68bd14ca25618be3edf24e2\""
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.284050999Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\""
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.401970091Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.404464641Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28: active requests=0, bytes read=727667"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.406415797Z" level=info msg="ImageCreate event name:\"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.409920833Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.411266903Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28\" with image id \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\", repo tag \"gcr.io/k8s-minikube/busybox:1.28\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\", size \"725911\" in 2.127171694s"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.411378057Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\" returns image reference \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\""
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.414728181Z" level=info msg="CreateContainer within sandbox \"78438e8022143055bed5e2d8a26db130ead88208a68bd14ca25618be3edf24e2\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.437197602Z" level=info msg="CreateContainer within sandbox \"78438e8022143055bed5e2d8a26db130ead88208a68bd14ca25618be3edf24e2\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"24e6d7cfe7ea4490a4e08a40f32b9cf717c4d83060631102c580d6adf2fc47f5\""
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.439640223Z" level=info msg="StartContainer for \"24e6d7cfe7ea4490a4e08a40f32b9cf717c4d83060631102c580d6adf2fc47f5\""
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.489937462Z" level=info msg="StartContainer for \"24e6d7cfe7ea4490a4e08a40f32b9cf717c4d83060631102c580d6adf2fc47f5\" returns successfully"
	
	
	==> coredns [0ec0a3a234c7c9ab89ca83a237362a229e9c5f0e94fdbf641b886cf994e1cd2f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:46089 - 56153 "HINFO IN 6072608555509463616.6529762715821029691. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009374887s
	[INFO] 10.244.0.4:35907 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000221161s
	[INFO] 10.244.0.4:36782 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005796917s
	[INFO] 10.244.0.4:41522 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000189547s
	[INFO] 10.244.0.4:42146 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118814s
	[INFO] 10.244.0.4:60607 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123758s
	[INFO] 10.244.0.4:43711 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000363945s
	[INFO] 10.244.0.4:55165 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147511s
	[INFO] 10.244.0.4:37988 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063814s
	[INFO] 10.244.0.4:34715 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110518s
	
	
	==> coredns [731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:50026 - 40228 "HINFO IN 6089878548460793106.7503956428927620962. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010088983s
	[INFO] 10.244.0.4:56129 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00054069s
	[INFO] 10.244.0.4:53926 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.015577927s
	[INFO] 10.244.0.4:39454 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 1.017801671s
	[INFO] 10.244.0.4:52928 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.006480432s
	[INFO] 10.244.0.4:37155 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000144828s
	[INFO] 10.244.0.4:60063 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003567762s
	[INFO] 10.244.0.4:60207 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153406s
	[INFO] 10.244.0.4:60174 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117303s
	[INFO] 10.244.0.4:60031 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124845s
	[INFO] 10.244.0.4:43114 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000177401s
	[INFO] 10.244.0.4:59108 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000291115s
	
	
	==> describe nodes <==
	Name:               ha-290859
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-290859
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ed8f1f01b35eff2786f40199152a1775806f2de2
	                    minikube.k8s.io/name=ha-290859
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_14T14_29_26_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Apr 2025 14:29:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-290859
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Apr 2025 14:41:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Apr 2025 14:37:12 +0000   Mon, 14 Apr 2025 14:29:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Apr 2025 14:37:12 +0000   Mon, 14 Apr 2025 14:29:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Apr 2025 14:37:12 +0000   Mon, 14 Apr 2025 14:29:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Apr 2025 14:37:12 +0000   Mon, 14 Apr 2025 14:29:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.110
	  Hostname:    ha-290859
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0538f5775f954b3bbf6bc94e8eb6c49a
	  System UUID:                0538f577-5f95-4b3b-bf6b-c94e8eb6c49a
	  Boot ID:                    357ae105-a7f9-47b1-bf31-1c1aadedfe92
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-t6bgg             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-668d6bf9bc-qnl6q             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 coredns-668d6bf9bc-wbn4p             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 etcd-ha-290859                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-hm99t                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-290859             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-290859    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-cg945                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-290859             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-290859                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 12m   kube-proxy       
	  Normal  Starting                 12m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m   kubelet          Node ha-290859 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m   kubelet          Node ha-290859 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m   kubelet          Node ha-290859 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m   node-controller  Node ha-290859 event: Registered Node ha-290859 in Controller
	  Normal  NodeReady                12m   kubelet          Node ha-290859 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051284] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038065] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.815736] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.968563] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.543371] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr14 14:29] systemd-fstab-generator[505]: Ignoring "noauto" option for root device
	[  +0.058894] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059786] systemd-fstab-generator[518]: Ignoring "noauto" option for root device
	[  +0.183634] systemd-fstab-generator[532]: Ignoring "noauto" option for root device
	[  +0.109211] systemd-fstab-generator[544]: Ignoring "noauto" option for root device
	[  +0.261328] systemd-fstab-generator[574]: Ignoring "noauto" option for root device
	[  +4.868852] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.061817] kauditd_printk_skb: 158 callbacks suppressed
	[  +0.541337] systemd-fstab-generator[688]: Ignoring "noauto" option for root device
	[  +4.433977] systemd-fstab-generator[826]: Ignoring "noauto" option for root device
	[  +0.054755] kauditd_printk_skb: 46 callbacks suppressed
	[  +7.040196] systemd-fstab-generator[1293]: Ignoring "noauto" option for root device
	[  +0.092655] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.133260] kauditd_printk_skb: 36 callbacks suppressed
	[ +14.332004] kauditd_printk_skb: 23 callbacks suppressed
	[Apr14 14:30] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [b9d0c942045346e617420beacf1ee53ebaa73b72295bfad233845fe524f8b15c] <==
	{"level":"info","ts":"2025-04-14T14:29:20.934880Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 became leader at term 2"}
	{"level":"info","ts":"2025-04-14T14:29:20.934897Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fbb007bab925a598 elected leader fbb007bab925a598 at term 2"}
	{"level":"info","ts":"2025-04-14T14:29:20.938840Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"fbb007bab925a598","local-member-attributes":"{Name:ha-290859 ClientURLs:[https://192.168.39.110:2379]}","request-path":"/0/members/fbb007bab925a598/attributes","cluster-id":"a3dbfa6decfc8853","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-14T14:29:20.938875Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T14:29:20.939017Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:29:20.939433Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T14:29:20.940639Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a3dbfa6decfc8853","local-member-id":"fbb007bab925a598","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:29:20.940850Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:29:20.940910Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:29:20.941291Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T14:29:20.941327Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T14:29:20.942134Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-14T14:29:20.942264Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.110:2379"}
	{"level":"info","ts":"2025-04-14T14:29:20.943625Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-14T14:29:20.943655Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2025-04-14T14:29:27.104552Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.197172ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/node-controller\" limit:1 ","response":"range_response_count:1 size:195"}
	{"level":"info","ts":"2025-04-14T14:29:27.104712Z","caller":"traceutil/trace.go:171","msg":"trace[2014118741] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/node-controller; range_end:; response_count:1; response_revision:283; }","duration":"161.489617ms","start":"2025-04-14T14:29:26.943197Z","end":"2025-04-14T14:29:27.104687Z","steps":["trace[2014118741] 'range keys from in-memory index tree'  (duration: 161.141805ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:29:27.105569Z","caller":"traceutil/trace.go:171","msg":"trace[1003808847] transaction","detail":"{read_only:false; response_revision:284; number_of_response:1; }","duration":"157.128151ms","start":"2025-04-14T14:29:26.948431Z","end":"2025-04-14T14:29:27.105559Z","steps":["trace[1003808847] 'process raft request'  (duration: 84.378612ms)","trace[1003808847] 'compare'  (duration: 71.52798ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-14T14:29:27.104865Z","caller":"traceutil/trace.go:171","msg":"trace[43329066] linearizableReadLoop","detail":"{readStateIndex:297; appliedIndex:296; }","duration":"119.436827ms","start":"2025-04-14T14:29:26.985404Z","end":"2025-04-14T14:29:27.104841Z","steps":["trace[43329066] 'read index received'  (duration: 47.335931ms)","trace[43329066] 'applied index is now lower than readState.Index'  (duration: 72.100547ms)"],"step_count":2}
	{"level":"warn","ts":"2025-04-14T14:29:27.105882Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.482108ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-290859\" limit:1 ","response":"range_response_count:1 size:4024"}
	{"level":"info","ts":"2025-04-14T14:29:27.105907Z","caller":"traceutil/trace.go:171","msg":"trace[1848025885] range","detail":"{range_begin:/registry/minions/ha-290859; range_end:; response_count:1; response_revision:284; }","duration":"120.538719ms","start":"2025-04-14T14:29:26.985360Z","end":"2025-04-14T14:29:27.105899Z","steps":["trace[1848025885] 'agreement among raft nodes before linearized reading'  (duration: 120.384333ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:30:04.979205Z","caller":"traceutil/trace.go:171","msg":"trace[85484590] transaction","detail":"{read_only:false; response_revision:496; number_of_response:1; }","duration":"156.247744ms","start":"2025-04-14T14:30:04.822935Z","end":"2025-04-14T14:30:04.979183Z","steps":["trace[85484590] 'process raft request'  (duration: 156.102613ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:39:20.967676Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":955}
	{"level":"info","ts":"2025-04-14T14:39:20.980951Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":955,"took":"12.971168ms","hash":3281203929,"current-db-size-bytes":2400256,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2400256,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2025-04-14T14:39:20.980998Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3281203929,"revision":955,"compact-revision":-1}
	
	
	==> kernel <==
	 14:41:59 up 13 min,  0 users,  load average: 0.40, 0.23, 0.12
	Linux ha-290859 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2df8ccb8d6ed928a95e69ecd1be2105fc737c699aa26805820a0af0eca5bb50d] <==
	I0414 14:39:54.507048       1 main.go:301] handling current node
	I0414 14:40:04.508951       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:40:04.508995       1 main.go:301] handling current node
	I0414 14:40:14.500379       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:40:14.500645       1 main.go:301] handling current node
	I0414 14:40:24.506288       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:40:24.506555       1 main.go:301] handling current node
	I0414 14:40:34.500952       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:40:34.501060       1 main.go:301] handling current node
	I0414 14:40:44.501586       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:40:44.501707       1 main.go:301] handling current node
	I0414 14:40:54.508592       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:40:54.508634       1 main.go:301] handling current node
	I0414 14:41:04.502440       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:41:04.502662       1 main.go:301] handling current node
	I0414 14:41:14.504432       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:41:14.504715       1 main.go:301] handling current node
	I0414 14:41:24.505571       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:41:24.505635       1 main.go:301] handling current node
	I0414 14:41:34.500339       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:41:34.500416       1 main.go:301] handling current node
	I0414 14:41:44.500407       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:41:44.500557       1 main.go:301] handling current node
	I0414 14:41:54.509039       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:41:54.509064       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3607093f95b0430c4841d7be9ed19d0163ff2e9ee2889a44f89bd1ca07bf42d3] <==
	I0414 14:29:22.362271       1 autoregister_controller.go:144] Starting autoregister controller
	I0414 14:29:22.362276       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0414 14:29:22.362280       1 cache.go:39] Caches are synced for autoregister controller
	I0414 14:29:22.378719       1 controller.go:615] quota admission added evaluator for: namespaces
	I0414 14:29:22.457815       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0414 14:29:23.164003       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0414 14:29:23.168635       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0414 14:29:23.168816       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0414 14:29:23.763560       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0414 14:29:23.812117       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0414 14:29:23.884276       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0414 14:29:23.896601       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.110]
	I0414 14:29:23.897534       1 controller.go:615] quota admission added evaluator for: endpoints
	I0414 14:29:23.902387       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0414 14:29:24.193931       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0414 14:29:25.780107       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0414 14:29:25.808820       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0414 14:29:25.816856       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0414 14:29:29.653221       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0414 14:29:29.756960       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0414 14:41:55.019097       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52466: use of closed network connection
	E0414 14:41:55.440782       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52532: use of closed network connection
	E0414 14:41:55.859929       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52600: use of closed network connection
	E0414 14:41:58.277207       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52686: use of closed network connection
	E0414 14:41:58.438151       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52698: use of closed network connection
	
	
	==> kube-controller-manager [8263b35014337f6119ba3a0d6487090fd5b1b3b8a002a99623620e847d186847] <==
	I0414 14:29:28.849617       1 shared_informer.go:320] Caches are synced for resource quota
	I0414 14:29:28.850996       1 shared_informer.go:320] Caches are synced for stateful set
	I0414 14:29:29.000358       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
	I0414 14:29:29.886420       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="120.420823ms"
	I0414 14:29:29.906585       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="20.109075ms"
	I0414 14:29:29.906712       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="88.01µs"
	I0414 14:29:44.519476       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
	I0414 14:29:44.534945       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
	I0414 14:29:44.547691       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="1.626341ms"
	I0414 14:29:44.559315       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="67.802µs"
	I0414 14:29:44.571127       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="74.78µs"
	I0414 14:29:44.594711       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="70.198µs"
	I0414 14:29:45.825051       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="19.769469ms"
	I0414 14:29:45.826885       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="164.591µs"
	I0414 14:29:45.846118       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="13.808387ms"
	I0414 14:29:45.849026       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="50.566µs"
	I0414 14:29:48.846765       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0414 14:29:56.189929       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
	I0414 14:30:00.864893       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="63.092508ms"
	I0414 14:30:00.876770       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="11.795122ms"
	I0414 14:30:00.876844       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="37.849µs"
	I0414 14:30:03.843786       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="5.465875ms"
	I0414 14:30:03.844627       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="57.422µs"
	I0414 14:30:26.371478       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
	I0414 14:37:12.908997       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
	
	
	==> kube-proxy [e22a81661302ff340c9846a7a06a13d955ab98cfe8e7088e0c805fb4f3eee8a2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0414 14:29:30.555771       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0414 14:29:30.580550       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.110"]
	E0414 14:29:30.580640       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0414 14:29:30.617235       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0414 14:29:30.617293       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0414 14:29:30.617328       1 server_linux.go:170] "Using iptables Proxier"
	I0414 14:29:30.620046       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0414 14:29:30.620989       1 server.go:497] "Version info" version="v1.32.2"
	I0414 14:29:30.621018       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 14:29:30.625365       1 config.go:329] "Starting node config controller"
	I0414 14:29:30.625863       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0414 14:29:30.628597       1 config.go:199] "Starting service config controller"
	I0414 14:29:30.628644       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0414 14:29:30.628665       1 config.go:105] "Starting endpoint slice config controller"
	I0414 14:29:30.628683       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0414 14:29:30.726314       1 shared_informer.go:320] Caches are synced for node config
	I0414 14:29:30.729639       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0414 14:29:30.729680       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [341626ffff967b14e3bfaa050905eba2b82a07223c0356ee50b5deeef6d9898b] <==
	E0414 14:29:22.288686       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:22.287191       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:22.288704       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:22.286394       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0414 14:29:22.288719       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0414 14:29:22.285771       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.108289       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0414 14:29:23.108351       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.153824       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:23.153954       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.203744       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0414 14:29:23.203977       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0414 14:29:23.367236       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0414 14:29:23.367550       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.396026       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0414 14:29:23.396243       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.401643       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:23.401820       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.425454       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0414 14:29:23.425684       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.433181       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:23.433222       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.457688       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0414 14:29:23.457949       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0414 14:29:25.662221       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 14 14:37:25 ha-290859 kubelet[1300]: E0414 14:37:25.693525    1300 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:37:25 ha-290859 kubelet[1300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:37:25 ha-290859 kubelet[1300]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:37:25 ha-290859 kubelet[1300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:37:25 ha-290859 kubelet[1300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 14:38:25 ha-290859 kubelet[1300]: E0414 14:38:25.691874    1300 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:38:25 ha-290859 kubelet[1300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:38:25 ha-290859 kubelet[1300]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:38:25 ha-290859 kubelet[1300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:38:25 ha-290859 kubelet[1300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 14:39:25 ha-290859 kubelet[1300]: E0414 14:39:25.692811    1300 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:39:25 ha-290859 kubelet[1300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:39:25 ha-290859 kubelet[1300]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:39:25 ha-290859 kubelet[1300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:39:25 ha-290859 kubelet[1300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 14:40:25 ha-290859 kubelet[1300]: E0414 14:40:25.693003    1300 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:40:25 ha-290859 kubelet[1300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:40:25 ha-290859 kubelet[1300]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:40:25 ha-290859 kubelet[1300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:40:25 ha-290859 kubelet[1300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 14:41:25 ha-290859 kubelet[1300]: E0414 14:41:25.692589    1300 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:41:25 ha-290859 kubelet[1300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:41:25 ha-290859 kubelet[1300]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:41:25 ha-290859 kubelet[1300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:41:25 ha-290859 kubelet[1300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
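The repeated kubelet errors above come from its periodic iptables "canary": kubelet recreates a KUBE-KUBELET-CANARY chain so it can detect when something else flushes its rules. In this guest image the IPv6 probe fails because the kernel lacks ip6tables nat support; the IPv4 path still works, so these entries are noise rather than the cause of the test failure. A hedged Go sketch of an equivalent read-only probe, assuming ip6tables is on PATH:

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Listing the nat table surfaces the same "can't initialize ip6tables
		// table `nat'" error seen in the kubelet log, without adding rules.
		out, err := exec.Command("ip6tables", "-w", "-t", "nat", "-L", "-n").CombinedOutput()
		if err != nil {
			log.Printf("ip6tables nat unavailable: %v\n%s", err, out)
			return
		}
		log.Println("ip6tables nat table is usable")
	}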
	
	
	==> storage-provisioner [922f97d06563e10c12ce83edd45e4f1aa0b78449dcdb50b413a7f4fc80cc346b] <==
	I0414 14:29:45.362622       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0414 14:29:45.429344       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0414 14:29:45.429932       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0414 14:29:45.442302       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0414 14:29:45.443637       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1cd1340a-7958-40a2-8c68-004b8c8385a8", APIVersion:"v1", ResourceVersion:"420", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-290859_00c8818d-bfd0-4e70-bffb-1f8673302f0b became leader
	I0414 14:29:45.444610       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-290859_00c8818d-bfd0-4e70-bffb-1f8673302f0b!
	I0414 14:29:45.546579       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-290859_00c8818d-bfd0-4e70-bffb-1f8673302f0b!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-290859 -n ha-290859
helpers_test.go:261: (dbg) Run:  kubectl --context ha-290859 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-58667487b6-8bg2x busybox-58667487b6-q9jvx
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-290859 describe pod busybox-58667487b6-8bg2x busybox-58667487b6-q9jvx
helpers_test.go:282: (dbg) kubectl --context ha-290859 describe pod busybox-58667487b6-8bg2x busybox-58667487b6-q9jvx:

                                                
                                                
-- stdout --
	Name:             busybox-58667487b6-8bg2x
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=58667487b6
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-58667487b6
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bh9gx (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-bh9gx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  94s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	
	
	Name:             busybox-58667487b6-q9jvx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=58667487b6
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-58667487b6
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fklg7 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-fklg7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  94s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (2.60s)
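The FailedScheduling events above are the whole story of this failure: "0/1 nodes are available" means the scheduler sees a single schedulable node, and that node already hosts a busybox replica, so pod anti-affinity keeps the remaining replicas Pending and the ping-from-pod checks have no running pods to exec into. As a hedged illustration only (the deployment spec ha_test.go actually applies is not shown in this log), a required anti-affinity term of the following shape produces exactly the scheduler message above:

    // Hedged sketch (Go, k8s.io/api types): a required pod anti-affinity term
    // that forbids two app=busybox pods on the same node. With one schedulable
    // node the scheduler reports:
    //   "0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules."
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func busyboxAntiAffinity() *corev1.Affinity {
        return &corev1.Affinity{
            PodAntiAffinity: &corev1.PodAntiAffinity{
                RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
                    LabelSelector: &metav1.LabelSelector{
                        MatchLabels: map[string]string{"app": "busybox"},
                    },
                    // At most one replica per kubelet hostname, i.e. per node.
                    TopologyKey: "kubernetes.io/hostname",
                }},
            },
        }
    }

    func main() {
        fmt.Printf("%+v\n", busyboxAntiAffinity())
    }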

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (53.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-290859 -v=7 --alsologtostderr
E0414 14:42:21.746989 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/addons-537199/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-290859 -v=7 --alsologtostderr: (50.71334562s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-290859 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-290859 status -v=7 --alsologtostderr: exit status 2 (590.568814ms)

                                                
                                                
-- stdout --
	ha-290859
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-290859-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-290859-m03
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0414 14:42:51.170576 1217385 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:42:51.170689 1217385 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:42:51.170696 1217385 out.go:358] Setting ErrFile to fd 2...
	I0414 14:42:51.170703 1217385 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:42:51.170931 1217385 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
	I0414 14:42:51.171131 1217385 out.go:352] Setting JSON to false
	I0414 14:42:51.171161 1217385 mustload.go:65] Loading cluster: ha-290859
	I0414 14:42:51.171215 1217385 notify.go:220] Checking for updates...
	I0414 14:42:51.172476 1217385 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:42:51.172537 1217385 status.go:174] checking status of ha-290859 ...
	I0414 14:42:51.173366 1217385 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:42:51.173425 1217385 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:42:51.189571 1217385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37143
	I0414 14:42:51.190093 1217385 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:42:51.190594 1217385 main.go:141] libmachine: Using API Version  1
	I0414 14:42:51.190616 1217385 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:42:51.191063 1217385 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:42:51.191324 1217385 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:42:51.193040 1217385 status.go:371] ha-290859 host status = "Running" (err=<nil>)
	I0414 14:42:51.193062 1217385 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:42:51.193384 1217385 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:42:51.193437 1217385 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:42:51.208739 1217385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38993
	I0414 14:42:51.209248 1217385 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:42:51.209712 1217385 main.go:141] libmachine: Using API Version  1
	I0414 14:42:51.209732 1217385 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:42:51.210118 1217385 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:42:51.210266 1217385 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:42:51.213036 1217385 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:42:51.213453 1217385 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:42:51.213493 1217385 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:42:51.213584 1217385 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:42:51.214077 1217385 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:42:51.214125 1217385 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:42:51.229548 1217385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42949
	I0414 14:42:51.230099 1217385 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:42:51.230616 1217385 main.go:141] libmachine: Using API Version  1
	I0414 14:42:51.230642 1217385 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:42:51.230987 1217385 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:42:51.231194 1217385 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:42:51.231415 1217385 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 14:42:51.231438 1217385 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:42:51.234213 1217385 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:42:51.234696 1217385 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:42:51.234731 1217385 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:42:51.234888 1217385 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:42:51.235092 1217385 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:42:51.235266 1217385 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:42:51.235388 1217385 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:42:51.319925 1217385 ssh_runner.go:195] Run: systemctl --version
	I0414 14:42:51.326813 1217385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:42:51.343190 1217385 kubeconfig.go:125] found "ha-290859" server: "https://192.168.39.254:8443"
	I0414 14:42:51.343242 1217385 api_server.go:166] Checking apiserver status ...
	I0414 14:42:51.343317 1217385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:42:51.360791 1217385 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1191/cgroup
	W0414 14:42:51.370508 1217385 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1191/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0414 14:42:51.370565 1217385 ssh_runner.go:195] Run: ls
	I0414 14:42:51.375060 1217385 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0414 14:42:51.379671 1217385 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0414 14:42:51.379702 1217385 status.go:463] ha-290859 apiserver status = Running (err=<nil>)
	I0414 14:42:51.379716 1217385 status.go:176] ha-290859 status: &{Name:ha-290859 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 14:42:51.379737 1217385 status.go:174] checking status of ha-290859-m02 ...
	I0414 14:42:51.380100 1217385 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:42:51.380142 1217385 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:42:51.396164 1217385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37729
	I0414 14:42:51.396738 1217385 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:42:51.397409 1217385 main.go:141] libmachine: Using API Version  1
	I0414 14:42:51.397434 1217385 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:42:51.397861 1217385 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:42:51.398115 1217385 main.go:141] libmachine: (ha-290859-m02) Calling .GetState
	I0414 14:42:51.399881 1217385 status.go:371] ha-290859-m02 host status = "Running" (err=<nil>)
	I0414 14:42:51.399901 1217385 host.go:66] Checking if "ha-290859-m02" exists ...
	I0414 14:42:51.400216 1217385 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:42:51.400259 1217385 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:42:51.415942 1217385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36263
	I0414 14:42:51.416542 1217385 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:42:51.417040 1217385 main.go:141] libmachine: Using API Version  1
	I0414 14:42:51.417064 1217385 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:42:51.417437 1217385 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:42:51.417623 1217385 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:42:51.420446 1217385 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:42:51.420842 1217385 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:42:51.420869 1217385 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:42:51.421070 1217385 host.go:66] Checking if "ha-290859-m02" exists ...
	I0414 14:42:51.421432 1217385 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:42:51.421487 1217385 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:42:51.437232 1217385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43433
	I0414 14:42:51.437676 1217385 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:42:51.438177 1217385 main.go:141] libmachine: Using API Version  1
	I0414 14:42:51.438200 1217385 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:42:51.438548 1217385 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:42:51.438735 1217385 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:42:51.438946 1217385 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 14:42:51.438970 1217385 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:42:51.441715 1217385 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:42:51.442063 1217385 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:42:51.442105 1217385 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:42:51.442219 1217385 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:42:51.442381 1217385 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:42:51.442517 1217385 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:42:51.442742 1217385 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:42:51.527359 1217385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:42:51.543564 1217385 kubeconfig.go:125] found "ha-290859" server: "https://192.168.39.254:8443"
	I0414 14:42:51.543594 1217385 api_server.go:166] Checking apiserver status ...
	I0414 14:42:51.543638 1217385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0414 14:42:51.556911 1217385 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0414 14:42:51.556937 1217385 status.go:463] ha-290859-m02 apiserver status = Stopped (err=<nil>)
	I0414 14:42:51.556951 1217385 status.go:176] ha-290859-m02 status: &{Name:ha-290859-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 14:42:51.556980 1217385 status.go:174] checking status of ha-290859-m03 ...
	I0414 14:42:51.557308 1217385 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:42:51.557357 1217385 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:42:51.573452 1217385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32799
	I0414 14:42:51.574053 1217385 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:42:51.574561 1217385 main.go:141] libmachine: Using API Version  1
	I0414 14:42:51.574584 1217385 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:42:51.575002 1217385 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:42:51.575214 1217385 main.go:141] libmachine: (ha-290859-m03) Calling .GetState
	I0414 14:42:51.576935 1217385 status.go:371] ha-290859-m03 host status = "Running" (err=<nil>)
	I0414 14:42:51.576952 1217385 host.go:66] Checking if "ha-290859-m03" exists ...
	I0414 14:42:51.577240 1217385 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:42:51.577284 1217385 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:42:51.593188 1217385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38393
	I0414 14:42:51.593702 1217385 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:42:51.594178 1217385 main.go:141] libmachine: Using API Version  1
	I0414 14:42:51.594201 1217385 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:42:51.594597 1217385 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:42:51.594787 1217385 main.go:141] libmachine: (ha-290859-m03) Calling .GetIP
	I0414 14:42:51.597484 1217385 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:42:51.597924 1217385 main.go:141] libmachine: (ha-290859-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:4a:72", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:42:14 +0000 UTC Type:0 Mac:52:54:00:b7:4a:72 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-290859-m03 Clientid:01:52:54:00:b7:4a:72}
	I0414 14:42:51.597987 1217385 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined IP address 192.168.39.112 and MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:42:51.598085 1217385 host.go:66] Checking if "ha-290859-m03" exists ...
	I0414 14:42:51.598473 1217385 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:42:51.598517 1217385 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:42:51.614520 1217385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38973
	I0414 14:42:51.615068 1217385 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:42:51.615584 1217385 main.go:141] libmachine: Using API Version  1
	I0414 14:42:51.615606 1217385 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:42:51.615968 1217385 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:42:51.616188 1217385 main.go:141] libmachine: (ha-290859-m03) Calling .DriverName
	I0414 14:42:51.616421 1217385 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 14:42:51.616447 1217385 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHHostname
	I0414 14:42:51.619501 1217385 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:42:51.619910 1217385 main.go:141] libmachine: (ha-290859-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:4a:72", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:42:14 +0000 UTC Type:0 Mac:52:54:00:b7:4a:72 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-290859-m03 Clientid:01:52:54:00:b7:4a:72}
	I0414 14:42:51.619971 1217385 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined IP address 192.168.39.112 and MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:42:51.620167 1217385 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHPort
	I0414 14:42:51.620356 1217385 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHKeyPath
	I0414 14:42:51.620495 1217385 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHUsername
	I0414 14:42:51.620616 1217385 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m03/id_rsa Username:docker}
	I0414 14:42:51.698646 1217385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:42:51.711675 1217385 status.go:176] ha-290859-m03 status: &{Name:ha-290859-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:236: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-290859 status -v=7 --alsologtostderr" : exit status 2
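The stderr trace above shows how the status command derives each node's state: the kubelet via `sudo systemctl is-active --quiet service kubelet` over SSH, and the apiserver via `sudo pgrep -xnf kube-apiserver.*minikube.*` followed by a GET against https://192.168.39.254:8443/healthz. On ha-290859-m02 the pgrep finds no process, so the node is reported Kubelet:Stopped / APIServer:Stopped and the command exits 2 even though the host VM is up. A minimal sketch of that probe sequence, run locally rather than over SSH (command strings are taken from the log; the helper names are illustrative, not minikube's actual functions):

    // Hedged sketch of the status probes visible in the log above:
    // kubelet via systemctl, apiserver via pgrep plus a /healthz check.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "os/exec"
    )

    func kubeletRunning() bool {
        // As logged: sudo systemctl is-active --quiet service kubelet
        return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
    }

    func apiserverProcessRunning() bool {
        // As logged: sudo pgrep -xnf kube-apiserver.*minikube.*
        return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func apiserverHealthy(endpoint string) bool {
        // The cluster serves a self-signed CA, so verification is skipped here.
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := client.Get(endpoint + "/healthz")
        if err != nil {
            return false
        }
        defer resp.Body.Close()
        return resp.StatusCode == http.StatusOK
    }

    func main() {
        fmt.Println("kubelet running:", kubeletRunning())
        if apiserverProcessRunning() && apiserverHealthy("https://192.168.39.254:8443") {
            fmt.Println("apiserver: Running")
        } else {
            fmt.Println("apiserver: Stopped")
        }
    }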
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-290859 -n ha-290859
helpers_test.go:244: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-290859 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-290859 logs -n 25: (1.130169792s)
helpers_test.go:252: TestMultiControlPlane/serial/AddWorkerNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x -- nslookup |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx -- nslookup |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg -- nslookup |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x             |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx             |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg             |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg -- sh       |           |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1            |           |         |         |                     |                     |
	| node    | add -p ha-290859 -v=7                | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:42 UTC | 14 Apr 25 14:42 UTC |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 14:28:44
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
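The four header lines above spell out the format of every entry in this dump: a severity letter ([IWEF]), the date and time to microseconds, the emitting process id, and the source file:line. A minimal sketch that emits the same format, assuming k8s.io/klog/v2 (the klog/glog family that produces these lines; whether minikube calls klog directly is not shown here):

    // Hedged sketch: emitting a line in the
    // [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg format shown above.
    package main

    import (
        "flag"

        "k8s.io/klog/v2"
    )

    func main() {
        klog.InitFlags(nil)
        flag.Set("logtostderr", "true")
        flag.Parse()
        defer klog.Flush()

        // Prints e.g.: I0414 14:28:44.853283 1213155 main.go:18] starting up
        klog.Infof("starting up")
    }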
	I0414 14:28:44.853283 1213155 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:28:44.853383 1213155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:28:44.853391 1213155 out.go:358] Setting ErrFile to fd 2...
	I0414 14:28:44.853395 1213155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:28:44.853589 1213155 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
	I0414 14:28:44.854173 1213155 out.go:352] Setting JSON to false
	I0414 14:28:44.855127 1213155 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":22268,"bootTime":1744618657,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 14:28:44.855241 1213155 start.go:139] virtualization: kvm guest
	I0414 14:28:44.857434 1213155 out.go:177] * [ha-290859] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 14:28:44.858763 1213155 out.go:177]   - MINIKUBE_LOCATION=20512
	I0414 14:28:44.858802 1213155 notify.go:220] Checking for updates...
	I0414 14:28:44.861113 1213155 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 14:28:44.862568 1213155 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:28:44.864291 1213155 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:28:44.865558 1213155 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 14:28:44.866690 1213155 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 14:28:44.867994 1213155 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 14:28:44.903880 1213155 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 14:28:44.904972 1213155 start.go:297] selected driver: kvm2
	I0414 14:28:44.904990 1213155 start.go:901] validating driver "kvm2" against <nil>
	I0414 14:28:44.905002 1213155 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 14:28:44.905693 1213155 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:28:44.905760 1213155 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20512-1196368/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 14:28:44.921165 1213155 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 14:28:44.921211 1213155 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 14:28:44.921449 1213155 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 14:28:44.921483 1213155 cni.go:84] Creating CNI manager for ""
	I0414 14:28:44.921521 1213155 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0414 14:28:44.921528 1213155 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0414 14:28:44.921581 1213155 start.go:340] cluster config:
	{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:28:44.921681 1213155 iso.go:125] acquiring lock: {Name:mkbf783c803effe6c4b8297ac6b84dcca9e29413 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:28:44.923479 1213155 out.go:177] * Starting "ha-290859" primary control-plane node in "ha-290859" cluster
	I0414 14:28:44.924489 1213155 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:28:44.924534 1213155 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
	I0414 14:28:44.924545 1213155 cache.go:56] Caching tarball of preloaded images
	I0414 14:28:44.924630 1213155 preload.go:172] Found /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0414 14:28:44.924642 1213155 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0414 14:28:44.925004 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:28:44.925036 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json: {Name:mk9cf46898e9311ef305249e5d7a46d116958366 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:28:44.925215 1213155 start.go:360] acquireMachinesLock for ha-290859: {Name:mk496006d22a0565bb9e0d565e1b3cb0cf0971cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 14:28:44.925249 1213155 start.go:364] duration metric: took 19.936µs to acquireMachinesLock for "ha-290859"
	I0414 14:28:44.925270 1213155 start.go:93] Provisioning new machine with config: &{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0414 14:28:44.925333 1213155 start.go:125] createHost starting for "" (driver="kvm2")
	I0414 14:28:44.926873 1213155 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0414 14:28:44.927025 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:28:44.927081 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:28:44.941913 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35769
	I0414 14:28:44.942352 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:28:44.942833 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:28:44.942851 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:28:44.943193 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:28:44.943375 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:28:44.943526 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:28:44.943664 1213155 start.go:159] libmachine.API.Create for "ha-290859" (driver="kvm2")
	I0414 14:28:44.943687 1213155 client.go:168] LocalClient.Create starting
	I0414 14:28:44.943713 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem
	I0414 14:28:44.943749 1213155 main.go:141] libmachine: Decoding PEM data...
	I0414 14:28:44.943766 1213155 main.go:141] libmachine: Parsing certificate...
	I0414 14:28:44.943825 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem
	I0414 14:28:44.943844 1213155 main.go:141] libmachine: Decoding PEM data...
	I0414 14:28:44.943857 1213155 main.go:141] libmachine: Parsing certificate...
	I0414 14:28:44.943880 1213155 main.go:141] libmachine: Running pre-create checks...
	I0414 14:28:44.943888 1213155 main.go:141] libmachine: (ha-290859) Calling .PreCreateCheck
	I0414 14:28:44.944202 1213155 main.go:141] libmachine: (ha-290859) Calling .GetConfigRaw
	I0414 14:28:44.944583 1213155 main.go:141] libmachine: Creating machine...
	I0414 14:28:44.944596 1213155 main.go:141] libmachine: (ha-290859) Calling .Create
	I0414 14:28:44.944741 1213155 main.go:141] libmachine: (ha-290859) creating KVM machine...
	I0414 14:28:44.944764 1213155 main.go:141] libmachine: (ha-290859) creating network...
	I0414 14:28:44.945897 1213155 main.go:141] libmachine: (ha-290859) DBG | found existing default KVM network
	I0414 14:28:44.946500 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:44.946375 1213178 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001236b0}
	I0414 14:28:44.946525 1213155 main.go:141] libmachine: (ha-290859) DBG | created network xml: 
	I0414 14:28:44.946536 1213155 main.go:141] libmachine: (ha-290859) DBG | <network>
	I0414 14:28:44.946547 1213155 main.go:141] libmachine: (ha-290859) DBG |   <name>mk-ha-290859</name>
	I0414 14:28:44.946556 1213155 main.go:141] libmachine: (ha-290859) DBG |   <dns enable='no'/>
	I0414 14:28:44.946567 1213155 main.go:141] libmachine: (ha-290859) DBG |   
	I0414 14:28:44.946578 1213155 main.go:141] libmachine: (ha-290859) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0414 14:28:44.946589 1213155 main.go:141] libmachine: (ha-290859) DBG |     <dhcp>
	I0414 14:28:44.946597 1213155 main.go:141] libmachine: (ha-290859) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0414 14:28:44.946611 1213155 main.go:141] libmachine: (ha-290859) DBG |     </dhcp>
	I0414 14:28:44.946635 1213155 main.go:141] libmachine: (ha-290859) DBG |   </ip>
	I0414 14:28:44.946659 1213155 main.go:141] libmachine: (ha-290859) DBG |   
	I0414 14:28:44.946681 1213155 main.go:141] libmachine: (ha-290859) DBG | </network>
	I0414 14:28:44.946692 1213155 main.go:141] libmachine: (ha-290859) DBG | 
	I0414 14:28:44.951588 1213155 main.go:141] libmachine: (ha-290859) DBG | trying to create private KVM network mk-ha-290859 192.168.39.0/24...
	I0414 14:28:45.019463 1213155 main.go:141] libmachine: (ha-290859) DBG | private KVM network mk-ha-290859 192.168.39.0/24 created
	I0414 14:28:45.019524 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.019424 1213178 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:28:45.019537 1213155 main.go:141] libmachine: (ha-290859) setting up store path in /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859 ...
	I0414 14:28:45.019577 1213155 main.go:141] libmachine: (ha-290859) building disk image from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 14:28:45.019612 1213155 main.go:141] libmachine: (ha-290859) Downloading /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 14:28:45.329551 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.329430 1213178 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa...
	I0414 14:28:45.651739 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.651571 1213178 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/ha-290859.rawdisk...
	I0414 14:28:45.651774 1213155 main.go:141] libmachine: (ha-290859) DBG | Writing magic tar header
	I0414 14:28:45.651813 1213155 main.go:141] libmachine: (ha-290859) DBG | Writing SSH key tar header
	I0414 14:28:45.651828 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.651709 1213178 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859 ...
	I0414 14:28:45.651838 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859
	I0414 14:28:45.651849 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines
	I0414 14:28:45.651870 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:28:45.651877 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368
	I0414 14:28:45.651888 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859 (perms=drwx------)
	I0414 14:28:45.651901 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines (perms=drwxr-xr-x)
	I0414 14:28:45.651912 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube (perms=drwxr-xr-x)
	I0414 14:28:45.651969 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 14:28:45.651997 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins
	I0414 14:28:45.652007 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368 (perms=drwxrwxr-x)
	I0414 14:28:45.652022 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 14:28:45.652031 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 14:28:45.652040 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home
	I0414 14:28:45.652050 1213155 main.go:141] libmachine: (ha-290859) DBG | skipping /home - not owner
	I0414 14:28:45.652117 1213155 main.go:141] libmachine: (ha-290859) creating domain...
	I0414 14:28:45.653155 1213155 main.go:141] libmachine: (ha-290859) define libvirt domain using xml: 
	I0414 14:28:45.653173 1213155 main.go:141] libmachine: (ha-290859) <domain type='kvm'>
	I0414 14:28:45.653182 1213155 main.go:141] libmachine: (ha-290859)   <name>ha-290859</name>
	I0414 14:28:45.653197 1213155 main.go:141] libmachine: (ha-290859)   <memory unit='MiB'>2200</memory>
	I0414 14:28:45.653206 1213155 main.go:141] libmachine: (ha-290859)   <vcpu>2</vcpu>
	I0414 14:28:45.653212 1213155 main.go:141] libmachine: (ha-290859)   <features>
	I0414 14:28:45.653231 1213155 main.go:141] libmachine: (ha-290859)     <acpi/>
	I0414 14:28:45.653240 1213155 main.go:141] libmachine: (ha-290859)     <apic/>
	I0414 14:28:45.653258 1213155 main.go:141] libmachine: (ha-290859)     <pae/>
	I0414 14:28:45.653267 1213155 main.go:141] libmachine: (ha-290859)     
	I0414 14:28:45.653272 1213155 main.go:141] libmachine: (ha-290859)   </features>
	I0414 14:28:45.653277 1213155 main.go:141] libmachine: (ha-290859)   <cpu mode='host-passthrough'>
	I0414 14:28:45.653281 1213155 main.go:141] libmachine: (ha-290859)   
	I0414 14:28:45.653287 1213155 main.go:141] libmachine: (ha-290859)   </cpu>
	I0414 14:28:45.653317 1213155 main.go:141] libmachine: (ha-290859)   <os>
	I0414 14:28:45.653340 1213155 main.go:141] libmachine: (ha-290859)     <type>hvm</type>
	I0414 14:28:45.653351 1213155 main.go:141] libmachine: (ha-290859)     <boot dev='cdrom'/>
	I0414 14:28:45.653362 1213155 main.go:141] libmachine: (ha-290859)     <boot dev='hd'/>
	I0414 14:28:45.653372 1213155 main.go:141] libmachine: (ha-290859)     <bootmenu enable='no'/>
	I0414 14:28:45.653379 1213155 main.go:141] libmachine: (ha-290859)   </os>
	I0414 14:28:45.653387 1213155 main.go:141] libmachine: (ha-290859)   <devices>
	I0414 14:28:45.653396 1213155 main.go:141] libmachine: (ha-290859)     <disk type='file' device='cdrom'>
	I0414 14:28:45.653409 1213155 main.go:141] libmachine: (ha-290859)       <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/boot2docker.iso'/>
	I0414 14:28:45.653425 1213155 main.go:141] libmachine: (ha-290859)       <target dev='hdc' bus='scsi'/>
	I0414 14:28:45.653434 1213155 main.go:141] libmachine: (ha-290859)       <readonly/>
	I0414 14:28:45.653441 1213155 main.go:141] libmachine: (ha-290859)     </disk>
	I0414 14:28:45.653450 1213155 main.go:141] libmachine: (ha-290859)     <disk type='file' device='disk'>
	I0414 14:28:45.653459 1213155 main.go:141] libmachine: (ha-290859)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 14:28:45.653472 1213155 main.go:141] libmachine: (ha-290859)       <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/ha-290859.rawdisk'/>
	I0414 14:28:45.653484 1213155 main.go:141] libmachine: (ha-290859)       <target dev='hda' bus='virtio'/>
	I0414 14:28:45.653515 1213155 main.go:141] libmachine: (ha-290859)     </disk>
	I0414 14:28:45.653535 1213155 main.go:141] libmachine: (ha-290859)     <interface type='network'>
	I0414 14:28:45.653542 1213155 main.go:141] libmachine: (ha-290859)       <source network='mk-ha-290859'/>
	I0414 14:28:45.653551 1213155 main.go:141] libmachine: (ha-290859)       <model type='virtio'/>
	I0414 14:28:45.653571 1213155 main.go:141] libmachine: (ha-290859)     </interface>
	I0414 14:28:45.653583 1213155 main.go:141] libmachine: (ha-290859)     <interface type='network'>
	I0414 14:28:45.653600 1213155 main.go:141] libmachine: (ha-290859)       <source network='default'/>
	I0414 14:28:45.653612 1213155 main.go:141] libmachine: (ha-290859)       <model type='virtio'/>
	I0414 14:28:45.653620 1213155 main.go:141] libmachine: (ha-290859)     </interface>
	I0414 14:28:45.653629 1213155 main.go:141] libmachine: (ha-290859)     <serial type='pty'>
	I0414 14:28:45.653637 1213155 main.go:141] libmachine: (ha-290859)       <target port='0'/>
	I0414 14:28:45.653643 1213155 main.go:141] libmachine: (ha-290859)     </serial>
	I0414 14:28:45.653650 1213155 main.go:141] libmachine: (ha-290859)     <console type='pty'>
	I0414 14:28:45.653666 1213155 main.go:141] libmachine: (ha-290859)       <target type='serial' port='0'/>
	I0414 14:28:45.653677 1213155 main.go:141] libmachine: (ha-290859)     </console>
	I0414 14:28:45.653688 1213155 main.go:141] libmachine: (ha-290859)     <rng model='virtio'>
	I0414 14:28:45.653706 1213155 main.go:141] libmachine: (ha-290859)       <backend model='random'>/dev/random</backend>
	I0414 14:28:45.653722 1213155 main.go:141] libmachine: (ha-290859)     </rng>
	I0414 14:28:45.653733 1213155 main.go:141] libmachine: (ha-290859)     
	I0414 14:28:45.653742 1213155 main.go:141] libmachine: (ha-290859)     
	I0414 14:28:45.653750 1213155 main.go:141] libmachine: (ha-290859)   </devices>
	I0414 14:28:45.653759 1213155 main.go:141] libmachine: (ha-290859) </domain>
	I0414 14:28:45.653770 1213155 main.go:141] libmachine: (ha-290859) 
	I0414 14:28:45.658722 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:59:bb:2c in network default
	I0414 14:28:45.659333 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:45.659353 1213155 main.go:141] libmachine: (ha-290859) starting domain...
	I0414 14:28:45.659378 1213155 main.go:141] libmachine: (ha-290859) ensuring networks are active...
	I0414 14:28:45.660118 1213155 main.go:141] libmachine: (ha-290859) Ensuring network default is active
	I0414 14:28:45.660455 1213155 main.go:141] libmachine: (ha-290859) Ensuring network mk-ha-290859 is active
	I0414 14:28:45.660871 1213155 main.go:141] libmachine: (ha-290859) getting domain XML...
	I0414 14:28:45.661572 1213155 main.go:141] libmachine: (ha-290859) creating domain...
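Everything from "creating network..." to this point is the kvm2 driver's provisioning sequence: define the private network from XML, build the disk image, define the domain from XML, start it, then poll the network's DHCP leases with growing delays until the new MAC address gets a lease (the "waiting for IP" retries that follow). A condensed, hedged sketch of that flow with the libvirt Go bindings (libvirt.org/go/libvirt); the XML constants below are abbreviated stand-ins for the documents printed verbatim above, and the helper names are illustrative, not minikube's:

    // Hedged sketch of the provisioning steps logged above; the real logic
    // lives in minikube's kvm2 machine driver.
    package main

    import (
        "fmt"
        "time"

        libvirt "libvirt.org/go/libvirt"
    )

    // Abbreviated stand-ins: the full XML appears verbatim in the log above.
    const (
        networkXML = `<network><name>mk-ha-290859</name><ip address='192.168.39.1' netmask='255.255.255.0'><dhcp><range start='192.168.39.2' end='192.168.39.253'/></dhcp></ip></network>`
        domainXML  = `<domain type='kvm'><name>ha-290859</name><memory unit='MiB'>2200</memory><vcpu>2</vcpu><os><type>hvm</type></os></domain>`
    )

    // waitForIP polls the network's DHCP leases until the domain's MAC
    // address shows up, sleeping with growing delays between attempts,
    // much like the retry.go lines in the log.
    func waitForIP(net *libvirt.Network, mac string) (string, error) {
        delay := 200 * time.Millisecond
        for i := 0; i < 20; i++ {
            leases, err := net.GetDHCPLeases()
            if err != nil {
                return "", err
            }
            for _, l := range leases {
                if l.Mac == mac {
                    return l.IPaddr, nil // e.g. 192.168.39.110
                }
            }
            time.Sleep(delay)
            delay += delay / 2
        }
        return "", fmt.Errorf("timed out waiting for an IP for %s", mac)
    }

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        net, err := conn.NetworkDefineXML(networkXML)
        if err != nil {
            panic(err)
        }
        if err := net.Create(); err != nil { // start the private network
            panic(err)
        }

        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            panic(err)
        }
        if err := dom.Create(); err != nil { // boot the VM
            panic(err)
        }

        fmt.Println(waitForIP(net, "52:54:00:be:9f:8b"))
    }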
	I0414 14:28:46.865636 1213155 main.go:141] libmachine: (ha-290859) waiting for IP...
	I0414 14:28:46.866384 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:46.866766 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:46.866798 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:46.866746 1213178 retry.go:31] will retry after 192.973653ms: waiting for domain to come up
	I0414 14:28:47.061336 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:47.061771 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:47.061833 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:47.061746 1213178 retry.go:31] will retry after 359.567223ms: waiting for domain to come up
	I0414 14:28:47.423487 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:47.423982 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:47.424016 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:47.423949 1213178 retry.go:31] will retry after 421.939914ms: waiting for domain to come up
	I0414 14:28:47.847747 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:47.848233 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:47.848285 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:47.848207 1213178 retry.go:31] will retry after 530.391474ms: waiting for domain to come up
	I0414 14:28:48.380081 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:48.380580 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:48.380623 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:48.380551 1213178 retry.go:31] will retry after 642.117854ms: waiting for domain to come up
	I0414 14:28:49.024104 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:49.024507 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:49.024543 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:49.024472 1213178 retry.go:31] will retry after 676.607867ms: waiting for domain to come up
	I0414 14:28:49.702625 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:49.702971 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:49.702999 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:49.702940 1213178 retry.go:31] will retry after 827.403569ms: waiting for domain to come up
	I0414 14:28:50.531673 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:50.532146 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:50.532168 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:50.532111 1213178 retry.go:31] will retry after 1.096062201s: waiting for domain to come up
	I0414 14:28:51.630700 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:51.631223 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:51.631271 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:51.631180 1213178 retry.go:31] will retry after 1.695737217s: waiting for domain to come up
	I0414 14:28:53.328391 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:53.328936 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:53.328976 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:53.328895 1213178 retry.go:31] will retry after 1.847433296s: waiting for domain to come up
	I0414 14:28:55.178635 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:55.179196 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:55.179222 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:55.179116 1213178 retry.go:31] will retry after 1.882043118s: waiting for domain to come up
	I0414 14:28:57.063275 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:57.063819 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:57.063839 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:57.063785 1213178 retry.go:31] will retry after 2.565601812s: waiting for domain to come up
	I0414 14:28:59.632546 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:59.633076 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:59.633121 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:59.633056 1213178 retry.go:31] will retry after 3.119155423s: waiting for domain to come up
	I0414 14:29:02.755950 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:02.756520 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:29:02.756617 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:29:02.756481 1213178 retry.go:31] will retry after 3.570724653s: waiting for domain to come up
	I0414 14:29:06.329744 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.330242 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has current primary IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.330260 1213155 main.go:141] libmachine: (ha-290859) found domain IP: 192.168.39.110
	I0414 14:29:06.330269 1213155 main.go:141] libmachine: (ha-290859) reserving static IP address...
	I0414 14:29:06.330641 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find host DHCP lease matching {name: "ha-290859", mac: "52:54:00:be:9f:8b", ip: "192.168.39.110"} in network mk-ha-290859
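
	The "will retry after ..." lines above show the driver polling libvirt's DHCP leases for the VM's MAC address with a growing, jittered delay (about 193ms on the first attempt, up to about 3.57s by the last). Below is a minimal Go sketch of that shape of loop; waitForIP and its lookup callback are hypothetical stand-ins for illustration, not minikube's actual retry.go helper:

	    package main

	    import (
	        "errors"
	        "fmt"
	        "math/rand"
	        "time"
	    )

	    // waitForIP polls lookup until it returns an address, sleeping for a
	    // randomized, growing interval between attempts, the same shape as the
	    // "will retry after ..." lines above. lookup stands in for the
	    // libvirt DHCP-lease query; both names are hypothetical.
	    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	        deadline := time.Now().Add(timeout)
	        delay := 200 * time.Millisecond
	        for time.Now().Before(deadline) {
	            if ip, err := lookup(); err == nil {
	                return ip, nil
	            }
	            // Add jitter and grow the base delay, capped so a slow boot
	            // is still polled every few seconds.
	            sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
	            fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
	            time.Sleep(sleep)
	            if delay < 4*time.Second {
	                delay = delay * 3 / 2
	            }
	        }
	        return "", errors.New("timed out waiting for domain IP")
	    }

	    func main() {
	        attempts := 0
	        ip, err := waitForIP(func() (string, error) {
	            attempts++
	            if attempts < 3 {
	                return "", errors.New("no lease yet") // simulate a VM that is still booting
	            }
	            return "192.168.39.110", nil
	        }, time.Minute)
	        fmt.Println(ip, err)
	    }
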
	I0414 14:29:06.406487 1213155 main.go:141] libmachine: (ha-290859) DBG | Getting to WaitForSSH function...
	I0414 14:29:06.406521 1213155 main.go:141] libmachine: (ha-290859) reserved static IP address 192.168.39.110 for domain ha-290859
	I0414 14:29:06.406533 1213155 main.go:141] libmachine: (ha-290859) waiting for SSH...
	I0414 14:29:06.409873 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.410210 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:minikube Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.410253 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.410314 1213155 main.go:141] libmachine: (ha-290859) DBG | Using SSH client type: external
	I0414 14:29:06.410387 1213155 main.go:141] libmachine: (ha-290859) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa (-rw-------)
	I0414 14:29:06.410418 1213155 main.go:141] libmachine: (ha-290859) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.110 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:29:06.410439 1213155 main.go:141] libmachine: (ha-290859) DBG | About to run SSH command:
	I0414 14:29:06.410452 1213155 main.go:141] libmachine: (ha-290859) DBG | exit 0
	I0414 14:29:06.535060 1213155 main.go:141] libmachine: (ha-290859) DBG | SSH cmd err, output: <nil>: 
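
	The SSH wait above shells out to /usr/bin/ssh with the options logged at 14:29:06.410418 and treats a zero exit status from the remote command `exit 0` as proof that the guest's sshd is up. A sketch under that assumption follows; sshReady is a hypothetical name and the key path is a placeholder:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "time"
	    )

	    // sshReady runs `ssh ... exit 0` with (a subset of) the options from
	    // the log above and reports whether the command exited zero.
	    func sshReady(keyPath, addr string) bool {
	        cmd := exec.Command("/usr/bin/ssh",
	            "-F", "/dev/null",
	            "-o", "ConnectTimeout=10",
	            "-o", "StrictHostKeyChecking=no",
	            "-o", "UserKnownHostsFile=/dev/null",
	            "-o", "IdentitiesOnly=yes",
	            "-i", keyPath,
	            "-p", "22",
	            "docker@"+addr,
	            "exit 0")
	        return cmd.Run() == nil
	    }

	    func main() {
	        for !sshReady("/path/to/id_rsa", "192.168.39.110") { // key path is a placeholder
	            time.Sleep(time.Second)
	        }
	        fmt.Println("SSH is up")
	    }
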
	I0414 14:29:06.535328 1213155 main.go:141] libmachine: (ha-290859) KVM machine creation complete
	I0414 14:29:06.535695 1213155 main.go:141] libmachine: (ha-290859) Calling .GetConfigRaw
	I0414 14:29:06.536306 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:06.536530 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:06.536742 1213155 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 14:29:06.536766 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:06.538276 1213155 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 14:29:06.538292 1213155 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 14:29:06.538297 1213155 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 14:29:06.538303 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:06.540789 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.541096 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.541142 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.541273 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:06.541468 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.541620 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.541797 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:06.541943 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:06.542216 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:06.542236 1213155 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 14:29:06.650464 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:29:06.650493 1213155 main.go:141] libmachine: Detecting the provisioner...
	I0414 14:29:06.650505 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:06.653952 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.654723 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.654757 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.654985 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:06.655204 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.655393 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.655541 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:06.655742 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:06.655964 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:06.655983 1213155 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 14:29:06.763752 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 14:29:06.763848 1213155 main.go:141] libmachine: found compatible host: buildroot
	I0414 14:29:06.763862 1213155 main.go:141] libmachine: Provisioning with buildroot...
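
	The "found compatible host: buildroot" decision is driven by the ID= field of the /etc/os-release dump captured just above. A small illustrative parser, with detectProvisioner a made-up name for the sake of the sketch:

	    package main

	    import (
	        "bufio"
	        "fmt"
	        "strings"
	    )

	    // detectProvisioner extracts the ID= field from an /etc/os-release
	    // dump, the way the log above matches "buildroot".
	    func detectProvisioner(osRelease string) string {
	        sc := bufio.NewScanner(strings.NewReader(osRelease))
	        for sc.Scan() {
	            line := strings.TrimSpace(sc.Text())
	            if v, ok := strings.CutPrefix(line, "ID="); ok {
	                return strings.Trim(v, `"`)
	            }
	        }
	        return "unknown"
	    }

	    func main() {
	        out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\n"
	        fmt.Println(detectProvisioner(out)) // buildroot
	    }
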
	I0414 14:29:06.763874 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:29:06.764294 1213155 buildroot.go:166] provisioning hostname "ha-290859"
	I0414 14:29:06.764326 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:29:06.764523 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:06.767077 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.767516 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.767542 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.767639 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:06.767813 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.767978 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.768165 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:06.768341 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:06.768572 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:06.768583 1213155 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-290859 && echo "ha-290859" | sudo tee /etc/hostname
	I0414 14:29:06.889296 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-290859
	
	I0414 14:29:06.889330 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:06.892172 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.892600 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.892626 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.892865 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:06.893083 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.893277 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.893435 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:06.893648 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:06.893858 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:06.893874 1213155 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-290859' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-290859/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-290859' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:29:07.007141 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:29:07.007184 1213155 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1196368/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1196368/.minikube}
	I0414 14:29:07.007203 1213155 buildroot.go:174] setting up certificates
	I0414 14:29:07.007215 1213155 provision.go:84] configureAuth start
	I0414 14:29:07.007224 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:29:07.007528 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:29:07.010400 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.010788 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.010824 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.010979 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.012963 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.013271 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.013387 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.013515 1213155 provision.go:143] copyHostCerts
	I0414 14:29:07.013548 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:29:07.013586 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem, removing ...
	I0414 14:29:07.013609 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:29:07.013691 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem (1082 bytes)
	I0414 14:29:07.013790 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:29:07.013815 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem, removing ...
	I0414 14:29:07.013825 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:29:07.013863 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem (1123 bytes)
	I0414 14:29:07.013930 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:29:07.013953 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem, removing ...
	I0414 14:29:07.013962 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:29:07.013998 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem (1675 bytes)
	I0414 14:29:07.014066 1213155 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem org=jenkins.ha-290859 san=[127.0.0.1 192.168.39.110 ha-290859 localhost minikube]
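
	The SAN list logged here ([127.0.0.1 192.168.39.110 ha-290859 localhost minikube]) all lands in a single server certificate. The crypto/x509 sketch below produces a certificate carrying those SANs; note it self-signs for brevity, whereas the real flow signs with ca.pem/ca-key.pem as the log states:

	    package main

	    import (
	        "crypto/rand"
	        "crypto/rsa"
	        "crypto/x509"
	        "crypto/x509/pkix"
	        "encoding/pem"
	        "math/big"
	        "net"
	        "os"
	        "time"
	    )

	    func main() {
	        key, err := rsa.GenerateKey(rand.Reader, 2048)
	        if err != nil {
	            panic(err)
	        }
	        // Server-cert template with the DNS and IP SANs from the log line.
	        tmpl := &x509.Certificate{
	            SerialNumber: big.NewInt(1),
	            Subject:      pkix.Name{Organization: []string{"jenkins.ha-290859"}},
	            NotBefore:    time.Now(),
	            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
	            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	            DNSNames:     []string{"ha-290859", "localhost", "minikube"},
	            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.110")},
	        }
	        // Self-signed (template signs itself); the provisioner instead uses the CA pair.
	        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	        if err != nil {
	            panic(err)
	        }
	        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	    }
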
	I0414 14:29:07.096347 1213155 provision.go:177] copyRemoteCerts
	I0414 14:29:07.096413 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:29:07.096445 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.099387 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.099720 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.099754 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.099919 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.100133 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.100320 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.100477 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:07.185597 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0414 14:29:07.185665 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 14:29:07.208427 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0414 14:29:07.208514 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0414 14:29:07.230077 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0414 14:29:07.230146 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 14:29:07.252057 1213155 provision.go:87] duration metric: took 244.822415ms to configureAuth
	I0414 14:29:07.252098 1213155 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:29:07.252381 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:07.252417 1213155 main.go:141] libmachine: Checking connection to Docker...
	I0414 14:29:07.252428 1213155 main.go:141] libmachine: (ha-290859) Calling .GetURL
	I0414 14:29:07.253526 1213155 main.go:141] libmachine: (ha-290859) DBG | using libvirt version 6000000
	I0414 14:29:07.255629 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.255987 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.256013 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.256164 1213155 main.go:141] libmachine: Docker is up and running!
	I0414 14:29:07.256179 1213155 main.go:141] libmachine: Reticulating splines...
	I0414 14:29:07.256186 1213155 client.go:171] duration metric: took 22.312490028s to LocalClient.Create
	I0414 14:29:07.256207 1213155 start.go:167] duration metric: took 22.312544194s to libmachine.API.Create "ha-290859"
	I0414 14:29:07.256216 1213155 start.go:293] postStartSetup for "ha-290859" (driver="kvm2")
	I0414 14:29:07.256225 1213155 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:29:07.256242 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.256494 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:29:07.256518 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.258683 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.259095 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.259129 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.259274 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.259443 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.259598 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.259770 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:07.341222 1213155 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:29:07.344960 1213155 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:29:07.344983 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/addons for local assets ...
	I0414 14:29:07.345036 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/files for local assets ...
	I0414 14:29:07.345105 1213155 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> 12036392.pem in /etc/ssl/certs
	I0414 14:29:07.345117 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /etc/ssl/certs/12036392.pem
	I0414 14:29:07.345204 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:29:07.353618 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:29:07.375295 1213155 start.go:296] duration metric: took 119.0622ms for postStartSetup
	I0414 14:29:07.375348 1213155 main.go:141] libmachine: (ha-290859) Calling .GetConfigRaw
	I0414 14:29:07.376009 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:29:07.378738 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.379089 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.379127 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.379360 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:29:07.379552 1213155 start.go:128] duration metric: took 22.454193164s to createHost
	I0414 14:29:07.379576 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.381911 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.382271 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.382299 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.382412 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.382636 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.382763 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.382918 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.383103 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:07.383383 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:07.383397 1213155 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:29:07.491798 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744640947.466359070
	
	I0414 14:29:07.491832 1213155 fix.go:216] guest clock: 1744640947.466359070
	I0414 14:29:07.491843 1213155 fix.go:229] Guest: 2025-04-14 14:29:07.46635907 +0000 UTC Remote: 2025-04-14 14:29:07.37956282 +0000 UTC m=+22.563725092 (delta=86.79625ms)
	I0414 14:29:07.491874 1213155 fix.go:200] guest clock delta is within tolerance: 86.79625ms
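
	Worked out from the two timestamps: guest 14:29:07.46635907 minus host 14:29:07.37956282 is 0.08679625s, i.e. the 86.79625ms delta in the log, inside the drift tolerance, so no clock adjustment is made. A sketch of the check, parsing the `date +%s.%N` output captured over SSH (the tolerance value here is illustrative):

	    package main

	    import (
	        "fmt"
	        "math"
	        "strconv"
	        "time"
	    )

	    // clockDelta parses `date +%s.%N` output from the guest and compares
	    // it with the host clock. Float parsing loses sub-microsecond
	    // precision, which is fine for a millisecond-scale drift check.
	    func clockDelta(guestOut string, hostNow time.Time) (time.Duration, error) {
	        secs, err := strconv.ParseFloat(guestOut, 64)
	        if err != nil {
	            return 0, err
	        }
	        guest := time.Unix(0, int64(secs*float64(time.Second)))
	        return guest.Sub(hostNow), nil
	    }

	    func main() {
	        host := time.Date(2025, 4, 14, 14, 29, 7, 379562820, time.UTC)
	        d, _ := clockDelta("1744640947.466359070", host)
	        const tolerance = 2 * time.Second // illustrative threshold
	        fmt.Printf("delta=%v within tolerance=%v: %v\n",
	            d, tolerance, math.Abs(float64(d)) < float64(tolerance))
	    }
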
	I0414 14:29:07.491882 1213155 start.go:83] releasing machines lock for "ha-290859", held for 22.566621352s
	I0414 14:29:07.491951 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.492257 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:29:07.494784 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.495186 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.495213 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.495369 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.495891 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.496108 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.496210 1213155 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:29:07.496270 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.496330 1213155 ssh_runner.go:195] Run: cat /version.json
	I0414 14:29:07.496359 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.499187 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.499556 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.499585 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.499605 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.499687 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.499909 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.500059 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.500076 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.500080 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.500225 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:07.500343 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.500495 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.500676 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.500868 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:07.610155 1213155 ssh_runner.go:195] Run: systemctl --version
	I0414 14:29:07.615832 1213155 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 14:29:07.620841 1213155 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:29:07.620918 1213155 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:29:07.635201 1213155 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 14:29:07.635238 1213155 start.go:495] detecting cgroup driver to use...
	I0414 14:29:07.635339 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0414 14:29:07.664507 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0414 14:29:07.677886 1213155 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:29:07.677968 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:29:07.691126 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:29:07.704327 1213155 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:29:07.821296 1213155 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:29:07.981478 1213155 docker.go:233] disabling docker service ...
	I0414 14:29:07.981570 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:29:07.995082 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:29:08.007593 1213155 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:29:08.118166 1213155 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:29:08.233009 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 14:29:08.245943 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:29:08.262966 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0414 14:29:08.272218 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0414 14:29:08.281344 1213155 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0414 14:29:08.281397 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0414 14:29:08.290468 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:29:08.299561 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0414 14:29:08.308656 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:29:08.317719 1213155 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:29:08.327133 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0414 14:29:08.336264 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0414 14:29:08.345279 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
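
	Taken together, the sed edits above aim to leave /etc/containerd/config.toml looking roughly like the fragment below (section paths as in containerd 1.7; the exact layout shipped in the guest image may differ):

	    [plugins."io.containerd.grpc.v1.cri"]
	      sandbox_image = "registry.k8s.io/pause:3.10"
	      enable_unprivileged_ports = true
	      restrict_oom_score_adj = false
	      [plugins."io.containerd.grpc.v1.cri".cni]
	        conf_dir = "/etc/cni/net.d"
	      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	        runtime_type = "io.containerd.runc.v2"
	        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	          SystemdCgroup = false
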
	I0414 14:29:08.354386 1213155 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:29:08.362578 1213155 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:29:08.362625 1213155 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:29:08.374609 1213155 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 14:29:08.383117 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:29:08.490311 1213155 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0414 14:29:08.517222 1213155 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0414 14:29:08.517297 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:29:08.522141 1213155 retry.go:31] will retry after 1.326617724s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
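
	After restarting containerd, the code simply stats the socket path and retries until it appears, as the retry message above shows. An equivalent sketch (waitForSocket is a hypothetical helper):

	    package main

	    import (
	        "fmt"
	        "os"
	        "time"
	    )

	    // waitForSocket polls for the containerd socket the same way the
	    // `stat /run/containerd/containerd.sock` retry above does.
	    func waitForSocket(path string, timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            if _, err := os.Stat(path); err == nil {
	                return nil
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return fmt.Errorf("timed out waiting for %s", path)
	    }

	    func main() {
	        fmt.Println(waitForSocket("/run/containerd/containerd.sock", 60*time.Second))
	    }
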
	I0414 14:29:09.849693 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:29:09.855377 1213155 start.go:563] Will wait 60s for crictl version
	I0414 14:29:09.855452 1213155 ssh_runner.go:195] Run: which crictl
	I0414 14:29:09.859356 1213155 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:29:09.901676 1213155 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0414 14:29:09.901749 1213155 ssh_runner.go:195] Run: containerd --version
	I0414 14:29:09.933729 1213155 ssh_runner.go:195] Run: containerd --version
	I0414 14:29:09.957147 1213155 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.23 ...
	I0414 14:29:09.958358 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:29:09.961074 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:09.961436 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:09.961465 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:09.961654 1213155 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 14:29:09.965618 1213155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:29:09.977763 1213155 kubeadm.go:883] updating cluster {Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 14:29:09.977920 1213155 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:29:09.977985 1213155 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:29:10.007423 1213155 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0414 14:29:10.007567 1213155 ssh_runner.go:195] Run: which lz4
	I0414 14:29:10.011302 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0414 14:29:10.011399 1213155 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 14:29:10.015201 1213155 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 14:29:10.015237 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (398567491 bytes)
	I0414 14:29:11.177802 1213155 containerd.go:563] duration metric: took 1.166430977s to copy over tarball
	I0414 14:29:11.177883 1213155 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 14:29:13.222422 1213155 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.044497794s)
	I0414 14:29:13.222461 1213155 containerd.go:570] duration metric: took 2.04462504s to extract the tarball
	I0414 14:29:13.222471 1213155 ssh_runner.go:146] rm: /preloaded.tar.lz4
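
	For scale: the preload tarball is 398567491 bytes (about 380 MiB), so the 1.166s copy works out to roughly 325 MiB/s and the 2.04s lz4 extraction to roughly 186 MiB/s, one reason the preload path is preferred over pulling each image individually.
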
	I0414 14:29:13.258541 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:29:13.368119 1213155 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0414 14:29:13.394813 1213155 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:29:13.428402 1213155 retry.go:31] will retry after 248.442754ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-04-14T14:29:13Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0414 14:29:13.677983 1213155 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:29:13.709958 1213155 containerd.go:627] all images are preloaded for containerd runtime.
	I0414 14:29:13.709986 1213155 cache_images.go:84] Images are preloaded, skipping loading
	I0414 14:29:13.709997 1213155 kubeadm.go:934] updating node { 192.168.39.110 8443 v1.32.2 containerd true true} ...
	I0414 14:29:13.710119 1213155 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-290859 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.110
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 14:29:13.710205 1213155 ssh_runner.go:195] Run: sudo crictl info
	I0414 14:29:13.747854 1213155 cni.go:84] Creating CNI manager for ""
	I0414 14:29:13.747881 1213155 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0414 14:29:13.747891 1213155 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 14:29:13.747912 1213155 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.110 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-290859 NodeName:ha-290859 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.110"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.110 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 14:29:13.748064 1213155 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.110
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-290859"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.110"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.110"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 14:29:13.748098 1213155 kube-vip.go:115] generating kube-vip config ...
	I0414 14:29:13.748144 1213155 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0414 14:29:13.764006 1213155 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0414 14:29:13.764157 1213155 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.10
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0414 14:29:13.764258 1213155 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 14:29:13.773742 1213155 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 14:29:13.773825 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0414 14:29:13.782879 1213155 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0414 14:29:13.798384 1213155 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 14:29:13.813614 1213155 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0414 14:29:13.828571 1213155 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1448 bytes)
	I0414 14:29:13.844489 1213155 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0414 14:29:13.848595 1213155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:29:13.861109 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:29:13.970530 1213155 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:29:13.987774 1213155 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859 for IP: 192.168.39.110
	I0414 14:29:13.987806 1213155 certs.go:194] generating shared ca certs ...
	I0414 14:29:13.987826 1213155 certs.go:226] acquiring lock for ca certs: {Name:mk7215406b4c41badf9eca6bf9f1036fd88f670e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:13.988007 1213155 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key
	I0414 14:29:13.988081 1213155 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key
	I0414 14:29:13.988097 1213155 certs.go:256] generating profile certs ...
	I0414 14:29:13.988180 1213155 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key
	I0414 14:29:13.988200 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt with IP's: []
	I0414 14:29:14.112386 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt ...
	I0414 14:29:14.112419 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt: {Name:mkaa12fb6551a5751b7fccd564d65a45c41d9fae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.112582 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key ...
	I0414 14:29:14.112593 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key: {Name:mk289f4dd0a4fd9031dc4ffc7198a0cf95bd5550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.112674 1213155 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037
	I0414 14:29:14.112690 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110 192.168.39.254]
	I0414 14:29:14.362652 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037 ...
	I0414 14:29:14.362686 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037: {Name:mkb37a2918627d85c90b385a1878c8973ae4ce15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.362861 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037 ...
	I0414 14:29:14.362875 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037: {Name:mk9be12aff468559ae8511cb5c354c2cb0f19d89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.362947 1213155 certs.go:381] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt
	I0414 14:29:14.363058 1213155 certs.go:385] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key
	I0414 14:29:14.363124 1213155 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key
	I0414 14:29:14.363139 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt with IP's: []
	I0414 14:29:14.734988 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt ...
	I0414 14:29:14.735020 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt: {Name:mkd4197f76084714cf4c93b86f69c9de5e486dfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.735175 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key ...
	I0414 14:29:14.735185 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key: {Name:mkafd73813de8b0bb698e460f51557bc241d5b76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.735249 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0414 14:29:14.735287 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0414 14:29:14.735300 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0414 14:29:14.735312 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0414 14:29:14.735324 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0414 14:29:14.735336 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0414 14:29:14.735348 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0414 14:29:14.735362 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0414 14:29:14.735413 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem (1338 bytes)
	W0414 14:29:14.735450 1213155 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639_empty.pem, impossibly tiny 0 bytes
	I0414 14:29:14.735459 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 14:29:14.735483 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem (1082 bytes)
	I0414 14:29:14.735504 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:29:14.735524 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem (1675 bytes)
	I0414 14:29:14.735559 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:29:14.735585 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:14.735598 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem -> /usr/share/ca-certificates/1203639.pem
	I0414 14:29:14.735609 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /usr/share/ca-certificates/12036392.pem
	I0414 14:29:14.736193 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:29:14.767094 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:29:14.800218 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:29:14.821856 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:29:14.844537 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0414 14:29:14.866333 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
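The apiserver profile cert just copied was generated (see the crypto.go lines above) for the service VIPs 10.96.0.1 and 10.0.0.1, loopback, the node IP 192.168.39.110 and the HA VIP 192.168.39.254. Once it lands on the guest, the SAN set can be sanity-checked with OpenSSL (a hedged check; the -ext option needs OpenSSL 1.1.1+):

	openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -ext subjectAltName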
	I0414 14:29:14.888112 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:29:14.916382 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 14:29:14.938747 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:29:14.961044 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem --> /usr/share/ca-certificates/1203639.pem (1338 bytes)
	I0414 14:29:14.982817 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /usr/share/ca-certificates/12036392.pem (1708 bytes)
	I0414 14:29:15.004432 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 14:29:15.020381 1213155 ssh_runner.go:195] Run: openssl version
	I0414 14:29:15.026049 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:29:15.036472 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:15.040722 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:15.040772 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:15.046327 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 14:29:15.056866 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1203639.pem && ln -fs /usr/share/ca-certificates/1203639.pem /etc/ssl/certs/1203639.pem"
	I0414 14:29:15.067689 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1203639.pem
	I0414 14:29:15.071944 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1203639.pem
	I0414 14:29:15.071988 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1203639.pem
	I0414 14:29:15.077553 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1203639.pem /etc/ssl/certs/51391683.0"
	I0414 14:29:15.088088 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12036392.pem && ln -fs /usr/share/ca-certificates/12036392.pem /etc/ssl/certs/12036392.pem"
	I0414 14:29:15.098760 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12036392.pem
	I0414 14:29:15.103102 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/12036392.pem
	I0414 14:29:15.103157 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12036392.pem
	I0414 14:29:15.108670 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12036392.pem /etc/ssl/certs/3ec20f2e.0"
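Each test -L / ln -fs pair above follows OpenSSL's c_rehash convention: a trust-store CA is looked up by the hash of its subject name, so every PEM needs a <hash>.0 symlink in /etc/ssl/certs (b5213941, 51391683 and 3ec20f2e here). A minimal sketch of how one such link is derived:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"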
	I0414 14:29:15.119187 1213155 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:29:15.123052 1213155 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 14:29:15.123124 1213155 kubeadm.go:392] StartCluster: {Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:29:15.123226 1213155 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0414 14:29:15.123302 1213155 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 14:29:15.161985 1213155 cri.go:89] found id: ""
	I0414 14:29:15.162066 1213155 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 14:29:15.171810 1213155 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 14:29:15.180816 1213155 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 14:29:15.189781 1213155 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 14:29:15.189798 1213155 kubeadm.go:157] found existing configuration files:
	
	I0414 14:29:15.189837 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 14:29:15.198461 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 14:29:15.198520 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 14:29:15.207495 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 14:29:15.216131 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 14:29:15.216195 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 14:29:15.224923 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 14:29:15.233259 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 14:29:15.233331 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 14:29:15.241811 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 14:29:15.250678 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 14:29:15.250735 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
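The four grep/rm pairs above amount to one stale-config sweep: each kubeconfig that kubeadm writes is kept only if it already points at the HA endpoint, and is otherwise deleted so kubeadm init can regenerate it. An equivalent shell sketch:

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done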
	I0414 14:29:15.260028 1213155 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 14:29:15.480841 1213155 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 14:29:26.375395 1213155 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0414 14:29:26.375454 1213155 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 14:29:26.375539 1213155 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 14:29:26.375638 1213155 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 14:29:26.375756 1213155 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 14:29:26.375859 1213155 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 14:29:26.377483 1213155 out.go:235]   - Generating certificates and keys ...
	I0414 14:29:26.377576 1213155 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 14:29:26.377649 1213155 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 14:29:26.377746 1213155 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 14:29:26.377814 1213155 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 14:29:26.377894 1213155 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 14:29:26.377993 1213155 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 14:29:26.378062 1213155 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 14:29:26.378201 1213155 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-290859 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
	I0414 14:29:26.378273 1213155 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 14:29:26.378435 1213155 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-290859 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
	I0414 14:29:26.378525 1213155 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 14:29:26.378617 1213155 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 14:29:26.378679 1213155 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 14:29:26.378756 1213155 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 14:29:26.378826 1213155 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 14:29:26.378905 1213155 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0414 14:29:26.378987 1213155 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 14:29:26.379078 1213155 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 14:29:26.379147 1213155 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 14:29:26.379232 1213155 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 14:29:26.379336 1213155 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 14:29:26.381520 1213155 out.go:235]   - Booting up control plane ...
	I0414 14:29:26.381636 1213155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 14:29:26.381716 1213155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 14:29:26.381797 1213155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 14:29:26.381942 1213155 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 14:29:26.382066 1213155 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 14:29:26.382127 1213155 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 14:29:26.382279 1213155 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0414 14:29:26.382430 1213155 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0414 14:29:26.382522 1213155 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.073677ms
	I0414 14:29:26.382613 1213155 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0414 14:29:26.382699 1213155 kubeadm.go:310] [api-check] The API server is healthy after 6.046564753s
	I0414 14:29:26.382824 1213155 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0414 14:29:26.382965 1213155 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0414 14:29:26.383055 1213155 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0414 14:29:26.383232 1213155 kubeadm.go:310] [mark-control-plane] Marking the node ha-290859 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0414 14:29:26.383336 1213155 kubeadm.go:310] [bootstrap-token] Using token: vqb1fe.jxjhh2el8g0wstxf
	I0414 14:29:26.384515 1213155 out.go:235]   - Configuring RBAC rules ...
	I0414 14:29:26.384631 1213155 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0414 14:29:26.384713 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0414 14:29:26.384863 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0414 14:29:26.384975 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0414 14:29:26.385071 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0414 14:29:26.385151 1213155 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0414 14:29:26.385262 1213155 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0414 14:29:26.385326 1213155 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0414 14:29:26.385400 1213155 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0414 14:29:26.385416 1213155 kubeadm.go:310] 
	I0414 14:29:26.385469 1213155 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0414 14:29:26.385475 1213155 kubeadm.go:310] 
	I0414 14:29:26.385551 1213155 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0414 14:29:26.385557 1213155 kubeadm.go:310] 
	I0414 14:29:26.385578 1213155 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0414 14:29:26.385628 1213155 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0414 14:29:26.385686 1213155 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0414 14:29:26.385693 1213155 kubeadm.go:310] 
	I0414 14:29:26.385743 1213155 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0414 14:29:26.385752 1213155 kubeadm.go:310] 
	I0414 14:29:26.385800 1213155 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0414 14:29:26.385806 1213155 kubeadm.go:310] 
	I0414 14:29:26.385852 1213155 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0414 14:29:26.385921 1213155 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0414 14:29:26.385993 1213155 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0414 14:29:26.385999 1213155 kubeadm.go:310] 
	I0414 14:29:26.386068 1213155 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0414 14:29:26.386137 1213155 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0414 14:29:26.386143 1213155 kubeadm.go:310] 
	I0414 14:29:26.386219 1213155 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token vqb1fe.jxjhh2el8g0wstxf \
	I0414 14:29:26.386324 1213155 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c1bc537cee1b1ab5982921331b936a1839b1da6b0963279993bdeae11071854b \
	I0414 14:29:26.386357 1213155 kubeadm.go:310] 	--control-plane 
	I0414 14:29:26.386367 1213155 kubeadm.go:310] 
	I0414 14:29:26.386468 1213155 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0414 14:29:26.386481 1213155 kubeadm.go:310] 
	I0414 14:29:26.386583 1213155 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vqb1fe.jxjhh2el8g0wstxf \
	I0414 14:29:26.386727 1213155 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c1bc537cee1b1ab5982921331b936a1839b1da6b0963279993bdeae11071854b 
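The --discovery-token-ca-cert-hash printed above is the SHA-256 of the cluster CA's DER-encoded public key. It can be recomputed on the control-plane node with the standard kubeadm recipe, pointed at the certificateDir logged earlier:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'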
	I0414 14:29:26.386755 1213155 cni.go:84] Creating CNI manager for ""
	I0414 14:29:26.386764 1213155 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0414 14:29:26.388208 1213155 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0414 14:29:26.389242 1213155 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0414 14:29:26.394753 1213155 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0414 14:29:26.394774 1213155 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0414 14:29:26.412210 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
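The cni.yaml applied here is the kindnet manifest chosen at the "multinode detected" step above; it runs as a DaemonSet in kube-system. One way to watch it come up (the DaemonSet name kindnet is an assumption from the default manifest):

	sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system rollout status daemonset kindnet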
	I0414 14:29:26.820060 1213155 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 14:29:26.820136 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:26.820188 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-290859 minikube.k8s.io/updated_at=2025_04_14T14_29_26_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=ed8f1f01b35eff2786f40199152a1775806f2de2 minikube.k8s.io/name=ha-290859 minikube.k8s.io/primary=true
	I0414 14:29:27.135153 1213155 ops.go:34] apiserver oom_adj: -16
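The -16 read back above comes from the legacy /proc/<pid>/oom_adj knob and shields the apiserver from the kernel OOM killer. On current kernels the same state is mirrored, on a different scale, in oom_score_adj:

	p=$(pgrep kube-apiserver)
	cat "/proc/$p/oom_adj"        # legacy scale -17..15 (here: -16)
	cat "/proc/$p/oom_score_adj"  # modern scale -1000..1000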
	I0414 14:29:27.135367 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:27.635449 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:28.135449 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:28.636235 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:29.136309 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:29.636026 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:29.742992 1213155 kubeadm.go:1113] duration metric: took 2.922923817s to wait for elevateKubeSystemPrivileges
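elevateKubeSystemPrivileges above is essentially a bounded poll: the repeated get sa default calls fire every ~500ms until the default ServiceAccount exists, the usual signal that the controller-manager's token machinery is live. A sketch of the loop:

	until sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done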
	I0414 14:29:29.743045 1213155 kubeadm.go:394] duration metric: took 14.619926947s to StartCluster
	I0414 14:29:29.743074 1213155 settings.go:142] acquiring lock: {Name:mk41907a6d0da0bb56b7cd58b5d8065ec36ecc97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:29.743194 1213155 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:29:29.744197 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/kubeconfig: {Name:mkeb969af3beabfdafe344f27031959a97621135 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:29.744491 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0414 14:29:29.744502 1213155 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0414 14:29:29.744531 1213155 start.go:241] waiting for startup goroutines ...
	I0414 14:29:29.744555 1213155 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 14:29:29.744638 1213155 addons.go:69] Setting storage-provisioner=true in profile "ha-290859"
	I0414 14:29:29.744667 1213155 addons.go:238] Setting addon storage-provisioner=true in "ha-290859"
	I0414 14:29:29.744674 1213155 addons.go:69] Setting default-storageclass=true in profile "ha-290859"
	I0414 14:29:29.744699 1213155 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:29:29.744707 1213155 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-290859"
	I0414 14:29:29.744811 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:29.745181 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.745244 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.745183 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.745351 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.761398 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40887
	I0414 14:29:29.761447 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39907
	I0414 14:29:29.761914 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.762048 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.762457 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.762483 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.762590 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.762615 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.762878 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.762995 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.763052 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:29.763589 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.763641 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.765711 1213155 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:29:29.765898 1213155 kapi.go:59] client config for ha-290859: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt", KeyFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key", CAFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0414 14:29:29.766513 1213155 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0414 14:29:29.766536 1213155 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0414 14:29:29.766543 1213155 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0414 14:29:29.766547 1213155 cert_rotation.go:140] Starting client certificate rotation controller
	I0414 14:29:29.766549 1213155 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0414 14:29:29.766958 1213155 addons.go:238] Setting addon default-storageclass=true in "ha-290859"
	I0414 14:29:29.767009 1213155 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:29:29.767411 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.767464 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.779638 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46315
	I0414 14:29:29.780179 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.780847 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.780887 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.781279 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.781512 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:29.783372 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:29.783403 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36833
	I0414 14:29:29.783908 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.784349 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.784370 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.784677 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.785084 1213155 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 14:29:29.785313 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.785366 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.786178 1213155 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 14:29:29.786200 1213155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 14:29:29.786221 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:29.789923 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:29.790430 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:29.790464 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:29.790637 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:29.790795 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:29.790922 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:29.791099 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:29.802732 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37933
	I0414 14:29:29.803356 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.803862 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.803890 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.804276 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.804490 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:29.806170 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:29.806431 1213155 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 14:29:29.806453 1213155 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 14:29:29.806472 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:29.808998 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:29.809401 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:29.809433 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:29.809569 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:29.809729 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:29.809892 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:29.810022 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:29.896163 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0414 14:29:29.925192 1213155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 14:29:29.976032 1213155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 14:29:30.538988 1213155 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
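The sed pipeline a few lines up splices a hosts stanza (plus a log directive) into the CoreDNS Corefile so pods can resolve host.minikube.internal to the hypervisor at 192.168.39.1. The rewritten Corefile can be inspected with:

	sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'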
	I0414 14:29:30.715801 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.715837 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.715837 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.715853 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.716172 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.716195 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.716206 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.716213 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.716280 1213155 main.go:141] libmachine: (ha-290859) DBG | Closing plugin on server side
	I0414 14:29:30.716311 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.716327 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.716336 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.716346 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.716567 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.716583 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.716597 1213155 main.go:141] libmachine: (ha-290859) DBG | Closing plugin on server side
	I0414 14:29:30.716566 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.716613 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.716759 1213155 round_trippers.go:470] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0414 14:29:30.716773 1213155 round_trippers.go:476] Request Headers:
	I0414 14:29:30.716785 1213155 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:29:30.716791 1213155 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:29:30.730413 1213155 round_trippers.go:581] Response Status: 200 OK in 13 milliseconds
	I0414 14:29:30.730637 1213155 round_trippers.go:470] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0414 14:29:30.730648 1213155 round_trippers.go:476] Request Headers:
	I0414 14:29:30.730655 1213155 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:29:30.730659 1213155 round_trippers.go:480]     Content-Type: application/vnd.kubernetes.protobuf
	I0414 14:29:30.730662 1213155 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:29:30.734349 1213155 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
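The GET/PUT pair above is the default-storageclass addon marking the standard StorageClass as the cluster default; the PUT rewrites the object with the is-default-class annotation set. A kubectl equivalent (a sketch, not what minikube literally runs):

	kubectl annotate storageclass standard \
	  storageclass.kubernetes.io/is-default-class=true --overwrite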
	I0414 14:29:30.734498 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.734513 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.734892 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.734913 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.734944 1213155 main.go:141] libmachine: (ha-290859) DBG | Closing plugin on server side
	I0414 14:29:30.736606 1213155 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0414 14:29:30.738276 1213155 addons.go:514] duration metric: took 993.723048ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0414 14:29:30.738323 1213155 start.go:246] waiting for cluster config update ...
	I0414 14:29:30.738339 1213155 start.go:255] writing updated cluster config ...
	I0414 14:29:30.739993 1213155 out.go:201] 
	I0414 14:29:30.741235 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:30.741303 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:29:30.742718 1213155 out.go:177] * Starting "ha-290859-m02" control-plane node in "ha-290859" cluster
	I0414 14:29:30.743745 1213155 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:29:30.743770 1213155 cache.go:56] Caching tarball of preloaded images
	I0414 14:29:30.743876 1213155 preload.go:172] Found /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0414 14:29:30.743890 1213155 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0414 14:29:30.743970 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:29:30.744172 1213155 start.go:360] acquireMachinesLock for ha-290859-m02: {Name:mk496006d22a0565bb9e0d565e1b3cb0cf0971cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 14:29:30.744229 1213155 start.go:364] duration metric: took 28.185µs to acquireMachinesLock for "ha-290859-m02"
	I0414 14:29:30.744249 1213155 start.go:93] Provisioning new machine with config: &{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0414 14:29:30.744334 1213155 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0414 14:29:30.745838 1213155 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0414 14:29:30.745923 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:30.745962 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:30.761449 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46555
	I0414 14:29:30.761938 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:30.762474 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:30.762500 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:30.762925 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:30.763197 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:29:30.763401 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:30.763637 1213155 start.go:159] libmachine.API.Create for "ha-290859" (driver="kvm2")
	I0414 14:29:30.763675 1213155 client.go:168] LocalClient.Create starting
	I0414 14:29:30.763717 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem
	I0414 14:29:30.763761 1213155 main.go:141] libmachine: Decoding PEM data...
	I0414 14:29:30.763783 1213155 main.go:141] libmachine: Parsing certificate...
	I0414 14:29:30.763861 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem
	I0414 14:29:30.763890 1213155 main.go:141] libmachine: Decoding PEM data...
	I0414 14:29:30.763907 1213155 main.go:141] libmachine: Parsing certificate...
	I0414 14:29:30.763954 1213155 main.go:141] libmachine: Running pre-create checks...
	I0414 14:29:30.763968 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .PreCreateCheck
	I0414 14:29:30.764183 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
	I0414 14:29:30.764607 1213155 main.go:141] libmachine: Creating machine...
	I0414 14:29:30.764633 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .Create
	I0414 14:29:30.764796 1213155 main.go:141] libmachine: (ha-290859-m02) creating KVM machine...
	I0414 14:29:30.764820 1213155 main.go:141] libmachine: (ha-290859-m02) creating network...
	I0414 14:29:30.765949 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found existing default KVM network
	I0414 14:29:30.766029 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found existing private KVM network mk-ha-290859
	I0414 14:29:30.766196 1213155 main.go:141] libmachine: (ha-290859-m02) setting up store path in /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02 ...
	I0414 14:29:30.766222 1213155 main.go:141] libmachine: (ha-290859-m02) building disk image from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 14:29:30.766301 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:30.766189 1213531 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:29:30.766373 1213155 main.go:141] libmachine: (ha-290859-m02) Downloading /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 14:29:31.062543 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:31.062391 1213531 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa...
	I0414 14:29:31.719024 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:31.718890 1213531 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/ha-290859-m02.rawdisk...
	I0414 14:29:31.719061 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Writing magic tar header
	I0414 14:29:31.719076 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Writing SSH key tar header
	I0414 14:29:31.719086 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:31.719015 1213531 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02 ...
	I0414 14:29:31.719187 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02
	I0414 14:29:31.719213 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02 (perms=drwx------)
	I0414 14:29:31.719221 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines
	I0414 14:29:31.719232 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:29:31.719239 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines (perms=drwxr-xr-x)
	I0414 14:29:31.719270 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368
	I0414 14:29:31.719288 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube (perms=drwxr-xr-x)
	I0414 14:29:31.719298 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 14:29:31.719315 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins
	I0414 14:29:31.719326 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home
	I0414 14:29:31.719336 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | skipping /home - not owner
	I0414 14:29:31.719349 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368 (perms=drwxrwxr-x)
	I0414 14:29:31.719368 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 14:29:31.719380 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
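The "magic tar header" and "SSH key tar header" writes above are the boot2docker-style bootstrap trick: the freshly generated key pair is packed as a small tar stream at the head of the otherwise sparse .rawdisk, and the guest tooling unpacks it on first boot to provision SSH access. A rough sketch of the same layout (file names illustrative, not minikube's exact code path):

	tar cf userdata.tar id_rsa id_rsa.pub                     # keys behind a tar header
	truncate -s 20000M ha-290859-m02.rawdisk                  # sparse backing file, 20000MB as requested
	dd if=userdata.tar of=ha-290859-m02.rawdisk conv=notrunc  # tar stream at offset 0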
	I0414 14:29:31.719386 1213155 main.go:141] libmachine: (ha-290859-m02) creating domain...
	I0414 14:29:31.720303 1213155 main.go:141] libmachine: (ha-290859-m02) define libvirt domain using xml: 
	I0414 14:29:31.720321 1213155 main.go:141] libmachine: (ha-290859-m02) <domain type='kvm'>
	I0414 14:29:31.720330 1213155 main.go:141] libmachine: (ha-290859-m02)   <name>ha-290859-m02</name>
	I0414 14:29:31.720338 1213155 main.go:141] libmachine: (ha-290859-m02)   <memory unit='MiB'>2200</memory>
	I0414 14:29:31.720346 1213155 main.go:141] libmachine: (ha-290859-m02)   <vcpu>2</vcpu>
	I0414 14:29:31.720352 1213155 main.go:141] libmachine: (ha-290859-m02)   <features>
	I0414 14:29:31.720359 1213155 main.go:141] libmachine: (ha-290859-m02)     <acpi/>
	I0414 14:29:31.720364 1213155 main.go:141] libmachine: (ha-290859-m02)     <apic/>
	I0414 14:29:31.720371 1213155 main.go:141] libmachine: (ha-290859-m02)     <pae/>
	I0414 14:29:31.720381 1213155 main.go:141] libmachine: (ha-290859-m02)     
	I0414 14:29:31.720411 1213155 main.go:141] libmachine: (ha-290859-m02)   </features>
	I0414 14:29:31.720433 1213155 main.go:141] libmachine: (ha-290859-m02)   <cpu mode='host-passthrough'>
	I0414 14:29:31.720452 1213155 main.go:141] libmachine: (ha-290859-m02)   
	I0414 14:29:31.720461 1213155 main.go:141] libmachine: (ha-290859-m02)   </cpu>
	I0414 14:29:31.720488 1213155 main.go:141] libmachine: (ha-290859-m02)   <os>
	I0414 14:29:31.720507 1213155 main.go:141] libmachine: (ha-290859-m02)     <type>hvm</type>
	I0414 14:29:31.720537 1213155 main.go:141] libmachine: (ha-290859-m02)     <boot dev='cdrom'/>
	I0414 14:29:31.720559 1213155 main.go:141] libmachine: (ha-290859-m02)     <boot dev='hd'/>
	I0414 14:29:31.720572 1213155 main.go:141] libmachine: (ha-290859-m02)     <bootmenu enable='no'/>
	I0414 14:29:31.720587 1213155 main.go:141] libmachine: (ha-290859-m02)   </os>
	I0414 14:29:31.720597 1213155 main.go:141] libmachine: (ha-290859-m02)   <devices>
	I0414 14:29:31.720609 1213155 main.go:141] libmachine: (ha-290859-m02)     <disk type='file' device='cdrom'>
	I0414 14:29:31.720626 1213155 main.go:141] libmachine: (ha-290859-m02)       <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/boot2docker.iso'/>
	I0414 14:29:31.720637 1213155 main.go:141] libmachine: (ha-290859-m02)       <target dev='hdc' bus='scsi'/>
	I0414 14:29:31.720649 1213155 main.go:141] libmachine: (ha-290859-m02)       <readonly/>
	I0414 14:29:31.720659 1213155 main.go:141] libmachine: (ha-290859-m02)     </disk>
	I0414 14:29:31.720668 1213155 main.go:141] libmachine: (ha-290859-m02)     <disk type='file' device='disk'>
	I0414 14:29:31.720684 1213155 main.go:141] libmachine: (ha-290859-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 14:29:31.720699 1213155 main.go:141] libmachine: (ha-290859-m02)       <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/ha-290859-m02.rawdisk'/>
	I0414 14:29:31.720732 1213155 main.go:141] libmachine: (ha-290859-m02)       <target dev='hda' bus='virtio'/>
	I0414 14:29:31.720746 1213155 main.go:141] libmachine: (ha-290859-m02)     </disk>
	I0414 14:29:31.720756 1213155 main.go:141] libmachine: (ha-290859-m02)     <interface type='network'>
	I0414 14:29:31.720768 1213155 main.go:141] libmachine: (ha-290859-m02)       <source network='mk-ha-290859'/>
	I0414 14:29:31.720777 1213155 main.go:141] libmachine: (ha-290859-m02)       <model type='virtio'/>
	I0414 14:29:31.720788 1213155 main.go:141] libmachine: (ha-290859-m02)     </interface>
	I0414 14:29:31.720799 1213155 main.go:141] libmachine: (ha-290859-m02)     <interface type='network'>
	I0414 14:29:31.720809 1213155 main.go:141] libmachine: (ha-290859-m02)       <source network='default'/>
	I0414 14:29:31.720821 1213155 main.go:141] libmachine: (ha-290859-m02)       <model type='virtio'/>
	I0414 14:29:31.720835 1213155 main.go:141] libmachine: (ha-290859-m02)     </interface>
	I0414 14:29:31.720844 1213155 main.go:141] libmachine: (ha-290859-m02)     <serial type='pty'>
	I0414 14:29:31.720855 1213155 main.go:141] libmachine: (ha-290859-m02)       <target port='0'/>
	I0414 14:29:31.720865 1213155 main.go:141] libmachine: (ha-290859-m02)     </serial>
	I0414 14:29:31.720875 1213155 main.go:141] libmachine: (ha-290859-m02)     <console type='pty'>
	I0414 14:29:31.720886 1213155 main.go:141] libmachine: (ha-290859-m02)       <target type='serial' port='0'/>
	I0414 14:29:31.720896 1213155 main.go:141] libmachine: (ha-290859-m02)     </console>
	I0414 14:29:31.720909 1213155 main.go:141] libmachine: (ha-290859-m02)     <rng model='virtio'>
	I0414 14:29:31.720943 1213155 main.go:141] libmachine: (ha-290859-m02)       <backend model='random'>/dev/random</backend>
	I0414 14:29:31.720956 1213155 main.go:141] libmachine: (ha-290859-m02)     </rng>
	I0414 14:29:31.720962 1213155 main.go:141] libmachine: (ha-290859-m02)     
	I0414 14:29:31.720972 1213155 main.go:141] libmachine: (ha-290859-m02)     
	I0414 14:29:31.720978 1213155 main.go:141] libmachine: (ha-290859-m02)   </devices>
	I0414 14:29:31.720993 1213155 main.go:141] libmachine: (ha-290859-m02) </domain>
	I0414 14:29:31.721002 1213155 main.go:141] libmachine: (ha-290859-m02) 
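The block above is the kvm2 driver logging, line by line, the libvirt <domain> definition it is about to create. As an illustration only (this is not minikube's actual source; the type name domainParams and its fields are invented here), the shape of such a definition can be rendered with Go's standard text/template:

// domainxml.go — hypothetical sketch of rendering a libvirt <domain> definition.
// Field names (Name, MemoryMiB, ...) are assumptions, not minikube's real types.
package main

import (
	"os"
	"text/template"
)

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.VCPUs}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

type domainParams struct {
	Name      string
	MemoryMiB int
	VCPUs     int
	DiskPath  string
	Network   string
}

func main() {
	t := template.Must(template.New("domain").Parse(domainTmpl))
	// Values mirror the log above: 2200 MiB, 2 vCPUs, the mk-ha-290859 network.
	_ = t.Execute(os.Stdout, domainParams{
		Name:      "ha-290859-m02",
		MemoryMiB: 2200,
		VCPUs:     2,
		DiskPath:  "/path/to/ha-290859-m02.rawdisk",
		Network:   "mk-ha-290859",
	})
}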
	I0414 14:29:31.727524 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:76:01:7d in network default
	I0414 14:29:31.728172 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:31.728187 1213155 main.go:141] libmachine: (ha-290859-m02) starting domain...
	I0414 14:29:31.728195 1213155 main.go:141] libmachine: (ha-290859-m02) ensuring networks are active...
	I0414 14:29:31.728896 1213155 main.go:141] libmachine: (ha-290859-m02) Ensuring network default is active
	I0414 14:29:31.729170 1213155 main.go:141] libmachine: (ha-290859-m02) Ensuring network mk-ha-290859 is active
	I0414 14:29:31.729521 1213155 main.go:141] libmachine: (ha-290859-m02) getting domain XML...
	I0414 14:29:31.730489 1213155 main.go:141] libmachine: (ha-290859-m02) creating domain...
	I0414 14:29:32.993969 1213155 main.go:141] libmachine: (ha-290859-m02) waiting for IP...
	I0414 14:29:32.996009 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:32.996441 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:32.996505 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:32.996448 1213531 retry.go:31] will retry after 202.522594ms: waiting for domain to come up
	I0414 14:29:33.201175 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:33.201705 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:33.201751 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:33.201682 1213531 retry.go:31] will retry after 346.96007ms: waiting for domain to come up
	I0414 14:29:33.550485 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:33.550900 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:33.550931 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:33.550863 1213531 retry.go:31] will retry after 407.207189ms: waiting for domain to come up
	I0414 14:29:33.959550 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:33.960116 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:33.960149 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:33.960094 1213531 retry.go:31] will retry after 434.401549ms: waiting for domain to come up
	I0414 14:29:34.395749 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:34.396217 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:34.396267 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:34.396208 1213531 retry.go:31] will retry after 552.547121ms: waiting for domain to come up
	I0414 14:29:34.949860 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:34.950310 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:34.950344 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:34.950269 1213531 retry.go:31] will retry after 848.939274ms: waiting for domain to come up
	I0414 14:29:35.800706 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:35.801275 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:35.801301 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:35.801229 1213531 retry.go:31] will retry after 1.078619357s: waiting for domain to come up
	I0414 14:29:36.881700 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:36.882163 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:36.882187 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:36.882128 1213531 retry.go:31] will retry after 1.079210669s: waiting for domain to come up
	I0414 14:29:37.963455 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:37.963935 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:37.963969 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:37.963899 1213531 retry.go:31] will retry after 1.194058186s: waiting for domain to come up
	I0414 14:29:39.160481 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:39.160993 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:39.161031 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:39.160949 1213531 retry.go:31] will retry after 1.513626688s: waiting for domain to come up
	I0414 14:29:40.676551 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:40.677038 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:40.677071 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:40.677004 1213531 retry.go:31] will retry after 1.924347004s: waiting for domain to come up
	I0414 14:29:42.603644 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:42.604168 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:42.604192 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:42.604145 1213531 retry.go:31] will retry after 2.797639018s: waiting for domain to come up
	I0414 14:29:45.405004 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:45.405658 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:45.405688 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:45.405627 1213531 retry.go:31] will retry after 2.864814671s: waiting for domain to come up
	I0414 14:29:48.274060 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:48.274518 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:48.274591 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:48.274508 1213531 retry.go:31] will retry after 4.611052523s: waiting for domain to come up
	I0414 14:29:52.886693 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:52.887068 1213155 main.go:141] libmachine: (ha-290859-m02) found domain IP: 192.168.39.111
	I0414 14:29:52.887093 1213155 main.go:141] libmachine: (ha-290859-m02) reserving static IP address...
	I0414 14:29:52.887105 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has current primary IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:52.887506 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find host DHCP lease matching {name: "ha-290859-m02", mac: "52:54:00:f0:fd:94", ip: "192.168.39.111"} in network mk-ha-290859
	I0414 14:29:52.966052 1213155 main.go:141] libmachine: (ha-290859-m02) reserved static IP address 192.168.39.111 for domain ha-290859-m02
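The repeated "will retry after …: waiting for domain to come up" lines above show delays growing roughly geometrically with jitter (202ms, 346ms, … 4.6s) until the DHCP lease appears. A generic sketch of that pattern in plain Go — assumed for illustration, not minikube's actual retry.go — looks like this:

// backoff.go — hypothetical jittered-backoff retry (growth factor chosen for illustration).
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryBackoff calls fn until it succeeds or attempts are exhausted, sleeping an
// increasing, jittered delay between tries, matching the progression in the log.
func retryBackoff(attempts int, base time.Duration, fn func() error) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		sleep := delay + jitter
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay = delay * 3 / 2 // grow ~1.5x per attempt
	}
	return err
}

func main() {
	tries := 0
	err := retryBackoff(16, 200*time.Millisecond, func() error {
		tries++
		if tries < 5 { // stand-in for "domain has no IP yet"
			return errors.New("waiting for domain to come up")
		}
		return nil
	})
	fmt.Println("done:", err)
}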
	I0414 14:29:52.966083 1213155 main.go:141] libmachine: (ha-290859-m02) waiting for SSH...
	I0414 14:29:52.966091 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Getting to WaitForSSH function...
	I0414 14:29:52.968665 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:52.969034 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:52.969082 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:52.969208 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Using SSH client type: external
	I0414 14:29:52.969231 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa (-rw-------)
	I0414 14:29:52.969263 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:29:52.969282 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | About to run SSH command:
	I0414 14:29:52.969295 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | exit 0
	I0414 14:29:53.095336 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | SSH cmd err, output: <nil>: 
	I0414 14:29:53.095545 1213155 main.go:141] libmachine: (ha-290859-m02) KVM machine creation complete
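The driver declares the machine created once `ssh … exit 0` returns cleanly using the external client and flags logged above. A standalone sketch of the same readiness probe — host, key path, and timing are placeholders, not taken from the driver — under the assumption that a zero exit status means the guest sshd is up:

// sshprobe.go — hypothetical probe mirroring the external-ssh "exit 0" check above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady returns nil once `ssh host exit 0` succeeds, retrying until timeout.
func sshReady(host, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	args := []string{
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		host, "exit", "0",
	}
	var err error
	for time.Now().Before(deadline) {
		if err = exec.Command("ssh", args...).Run(); err == nil {
			return nil // guest sshd answered; machine is reachable
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("ssh never became ready: %w", err)
}

func main() {
	// Placeholder values; the log uses docker@192.168.39.111 and the per-machine id_rsa.
	fmt.Println(sshReady("docker@192.168.39.111", "/path/to/id_rsa", 2*time.Minute))
}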
	I0414 14:29:53.095910 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
	I0414 14:29:53.096462 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:53.096622 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:53.096806 1213155 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 14:29:53.096820 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetState
	I0414 14:29:53.098070 1213155 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 14:29:53.098085 1213155 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 14:29:53.098090 1213155 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 14:29:53.098095 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.100244 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.100649 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.100680 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.100852 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.101066 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.101236 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.101372 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.101519 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:53.101769 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:53.101782 1213155 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 14:29:53.206593 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:29:53.206617 1213155 main.go:141] libmachine: Detecting the provisioner...
	I0414 14:29:53.206628 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.209588 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.209969 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.209988 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.210187 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.210382 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.210544 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.210717 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.210971 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:53.211192 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:53.211205 1213155 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 14:29:53.315888 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 14:29:53.315980 1213155 main.go:141] libmachine: found compatible host: buildroot
	I0414 14:29:53.315990 1213155 main.go:141] libmachine: Provisioning with buildroot...
	I0414 14:29:53.316001 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:29:53.316277 1213155 buildroot.go:166] provisioning hostname "ha-290859-m02"
	I0414 14:29:53.316306 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:29:53.316451 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.319393 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.319803 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.319837 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.319946 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.320140 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.320321 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.320450 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.320602 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:53.320806 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:53.320818 1213155 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-290859-m02 && echo "ha-290859-m02" | sudo tee /etc/hostname
	I0414 14:29:53.442594 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-290859-m02
	
	I0414 14:29:53.442629 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.445561 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.445918 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.445944 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.446150 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.446351 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.446528 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.446678 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.446833 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:53.447038 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:53.447053 1213155 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-290859-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-290859-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-290859-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:29:53.559946 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:29:53.559988 1213155 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1196368/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1196368/.minikube}
	I0414 14:29:53.560014 1213155 buildroot.go:174] setting up certificates
	I0414 14:29:53.560031 1213155 provision.go:84] configureAuth start
	I0414 14:29:53.560046 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:29:53.560377 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:29:53.562822 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.563207 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.563237 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.563574 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.566107 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.566478 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.566505 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.566628 1213155 provision.go:143] copyHostCerts
	I0414 14:29:53.566676 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:29:53.566716 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem, removing ...
	I0414 14:29:53.566730 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:29:53.566839 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem (1082 bytes)
	I0414 14:29:53.566954 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:29:53.566979 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem, removing ...
	I0414 14:29:53.566987 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:29:53.567026 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem (1123 bytes)
	I0414 14:29:53.567106 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:29:53.567130 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem, removing ...
	I0414 14:29:53.567137 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:29:53.567173 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem (1675 bytes)
	I0414 14:29:53.567293 1213155 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem org=jenkins.ha-290859-m02 san=[127.0.0.1 192.168.39.111 ha-290859-m02 localhost minikube]
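The "generating server cert … san=[…]" line lists the subject-alternative names baked into the machine's server certificate: the loopback and VM IPs plus the hostname aliases. A minimal, self-contained sketch of issuing such a cert with Go's standard crypto/x509 — the CA here is a throwaway created inline, whereas the real flow loads ca.pem/ca-key.pem from .minikube/certs, and this is not minikube's provision.go:

// servercert.go — hypothetical sketch: sign a server cert with IP and DNS SANs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA for the sketch (errors ignored for brevity).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with SANs like those in the log: hostname aliases plus the VM IPs.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-290859-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-290859-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.111")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}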
	I0414 14:29:53.976110 1213155 provision.go:177] copyRemoteCerts
	I0414 14:29:53.976184 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:29:53.976219 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.978798 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.979170 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.979202 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.979355 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.979571 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.979771 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.979950 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:29:54.060926 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0414 14:29:54.061020 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0414 14:29:54.083723 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0414 14:29:54.083818 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 14:29:54.106702 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0414 14:29:54.106773 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 14:29:54.128136 1213155 provision.go:87] duration metric: took 568.088664ms to configureAuth
	I0414 14:29:54.128177 1213155 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:29:54.128372 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:54.128400 1213155 main.go:141] libmachine: Checking connection to Docker...
	I0414 14:29:54.128413 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetURL
	I0414 14:29:54.129571 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | using libvirt version 6000000
	I0414 14:29:54.131690 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.132071 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.132095 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.132296 1213155 main.go:141] libmachine: Docker is up and running!
	I0414 14:29:54.132311 1213155 main.go:141] libmachine: Reticulating splines...
	I0414 14:29:54.132318 1213155 client.go:171] duration metric: took 23.368636066s to LocalClient.Create
	I0414 14:29:54.132344 1213155 start.go:167] duration metric: took 23.368708618s to libmachine.API.Create "ha-290859"
	I0414 14:29:54.132356 1213155 start.go:293] postStartSetup for "ha-290859-m02" (driver="kvm2")
	I0414 14:29:54.132370 1213155 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:29:54.132394 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.132652 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:29:54.132681 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:54.134726 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.135119 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.135146 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.135312 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:54.135512 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.135648 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:54.135782 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:29:54.217134 1213155 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:29:54.221237 1213155 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:29:54.221265 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/addons for local assets ...
	I0414 14:29:54.221324 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/files for local assets ...
	I0414 14:29:54.221392 1213155 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> 12036392.pem in /etc/ssl/certs
	I0414 14:29:54.221401 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /etc/ssl/certs/12036392.pem
	I0414 14:29:54.221495 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:29:54.230111 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:29:54.253934 1213155 start.go:296] duration metric: took 121.560617ms for postStartSetup
	I0414 14:29:54.253995 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
	I0414 14:29:54.254683 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:29:54.257374 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.257778 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.257811 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.258118 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:29:54.258332 1213155 start.go:128] duration metric: took 23.513984018s to createHost
	I0414 14:29:54.258362 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:54.260873 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.261257 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.261285 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.261448 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:54.261638 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.261821 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.261984 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:54.262185 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:54.262369 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:54.262379 1213155 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:29:54.367727 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744640994.343893226
	
	I0414 14:29:54.367759 1213155 fix.go:216] guest clock: 1744640994.343893226
	I0414 14:29:54.367766 1213155 fix.go:229] Guest: 2025-04-14 14:29:54.343893226 +0000 UTC Remote: 2025-04-14 14:29:54.258346943 +0000 UTC m=+69.442509295 (delta=85.546283ms)
	I0414 14:29:54.367782 1213155 fix.go:200] guest clock delta is within tolerance: 85.546283ms
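The fix.go lines above run `date +%s.%N` on the guest and compare the result against the host clock, accepting the machine when the skew (85.5ms here) is within tolerance. A sketch of that comparison, with the tolerance value assumed since the log only says "within tolerance":

// clockdelta.go — hypothetical sketch of the guest-clock tolerance check above.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// guestDelta parses `date +%s.%N` output and returns guest-minus-host skew.
func guestDelta(dateOutput string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(dateOutput), 64)
	if err != nil {
		return 0, err
	}
	sec, frac := math.Modf(secs)
	guest := time.Unix(int64(sec), int64(frac*1e9))
	return guest.Sub(host), nil
}

func main() {
	// Host timestamp taken from the log line above; float parsing loses
	// sub-millisecond precision, which is fine for a tolerance check.
	host := time.Date(2025, 4, 14, 14, 29, 54, 258346943, time.UTC)
	delta, _ := guestDelta("1744640994.343893226\n", host)
	const tolerance = time.Second // assumed value
	fmt.Printf("delta=%v within=%v\n", delta, delta.Abs() < tolerance)
}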
	I0414 14:29:54.367788 1213155 start.go:83] releasing machines lock for "ha-290859-m02", held for 23.623550564s
	I0414 14:29:54.367807 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.368115 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:29:54.370975 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.371432 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.371462 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.373758 1213155 out.go:177] * Found network options:
	I0414 14:29:54.375127 1213155 out.go:177]   - NO_PROXY=192.168.39.110
	W0414 14:29:54.376278 1213155 proxy.go:119] fail to check proxy env: Error ip not in block
	I0414 14:29:54.376312 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.376913 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.377127 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.377268 1213155 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:29:54.377316 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	W0414 14:29:54.377370 1213155 proxy.go:119] fail to check proxy env: Error ip not in block
	I0414 14:29:54.377457 1213155 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0414 14:29:54.377481 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:54.380102 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.380374 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.380406 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.380429 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.380578 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:54.380741 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.380859 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.380897 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.380909 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:54.381045 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:29:54.381125 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:54.381305 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.381467 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:54.381614 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	W0414 14:29:54.458225 1213155 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:29:54.458308 1213155 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:29:54.490449 1213155 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 14:29:54.490475 1213155 start.go:495] detecting cgroup driver to use...
	I0414 14:29:54.490555 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0414 14:29:54.524660 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0414 14:29:54.537871 1213155 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:29:54.537936 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:29:54.549801 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:29:54.562203 1213155 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:29:54.666348 1213155 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:29:54.786710 1213155 docker.go:233] disabling docker service ...
	I0414 14:29:54.786789 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:29:54.800092 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:29:54.812105 1213155 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:29:54.936777 1213155 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:29:55.059002 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 14:29:55.072980 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:29:55.089970 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0414 14:29:55.099362 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0414 14:29:55.108681 1213155 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0414 14:29:55.108766 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0414 14:29:55.118203 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:29:55.127402 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0414 14:29:55.136483 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:29:55.145554 1213155 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:29:55.154769 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0414 14:29:55.163700 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0414 14:29:55.172612 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
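Each `sed -i -r 's|…|…|'` above patches one key of /etc/containerd/config.toml in place: the pause image, the cgroup driver, the runc runtime version, the CNI conf dir. The same kind of edit expressed in Go with the standard regexp package — this mirrors only the SystemdCgroup line and is an illustration, not how minikube applies it:

// tomlpatch.go — hypothetical in-place edit equivalent to one of the sed calls above.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	path := "/etc/containerd/config.toml" // needs root, as the sudo sed calls do
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	// Mirrors: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	patched := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, patched, 0644); err != nil {
		fmt.Println("write:", err)
	}
}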
	I0414 14:29:55.181597 1213155 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:29:55.189962 1213155 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:29:55.190019 1213155 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:29:55.202112 1213155 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 14:29:55.210883 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:29:55.319480 1213155 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0414 14:29:55.344914 1213155 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0414 14:29:55.345008 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:29:55.349081 1213155 retry.go:31] will retry after 1.00520308s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0414 14:29:56.354657 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
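"Will wait 60s for socket path /run/containerd/containerd.sock", a failed stat, one retry, then success: this is a plain poll-until-exists loop after restarting containerd. A standalone sketch of that wait (interval assumed; the log shows a single ~1s retry):

// sockwait.go — hypothetical poll for a unix socket to appear, as in the log above.
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath stats path until it exists or the timeout elapses.
func waitForPath(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
		}
		time.Sleep(interval)
	}
}

func main() {
	err := waitForPath("/run/containerd/containerd.sock", 60*time.Second, time.Second)
	fmt.Println(err)
}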
	I0414 14:29:56.359600 1213155 start.go:563] Will wait 60s for crictl version
	I0414 14:29:56.359685 1213155 ssh_runner.go:195] Run: which crictl
	I0414 14:29:56.363336 1213155 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:29:56.403201 1213155 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0414 14:29:56.403312 1213155 ssh_runner.go:195] Run: containerd --version
	I0414 14:29:56.430179 1213155 ssh_runner.go:195] Run: containerd --version
	I0414 14:29:56.454598 1213155 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.23 ...
	I0414 14:29:56.455785 1213155 out.go:177]   - env NO_PROXY=192.168.39.110
	I0414 14:29:56.456735 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:29:56.459280 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:56.459661 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:56.459691 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:56.459901 1213155 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 14:29:56.463673 1213155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:29:56.475057 1213155 mustload.go:65] Loading cluster: ha-290859
	I0414 14:29:56.475248 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:56.475557 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:56.475600 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:56.490597 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45247
	I0414 14:29:56.491136 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:56.491690 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:56.491711 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:56.492119 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:56.492309 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:56.493794 1213155 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:29:56.494134 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:56.494173 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:56.509360 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38381
	I0414 14:29:56.509774 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:56.510229 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:56.510256 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:56.510618 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:56.510840 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:56.511031 1213155 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859 for IP: 192.168.39.111
	I0414 14:29:56.511044 1213155 certs.go:194] generating shared ca certs ...
	I0414 14:29:56.511057 1213155 certs.go:226] acquiring lock for ca certs: {Name:mk7215406b4c41badf9eca6bf9f1036fd88f670e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:56.511177 1213155 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key
	I0414 14:29:56.511226 1213155 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key
	I0414 14:29:56.511236 1213155 certs.go:256] generating profile certs ...
	I0414 14:29:56.511347 1213155 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key
	I0414 14:29:56.511373 1213155 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e
	I0414 14:29:56.511386 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110 192.168.39.111 192.168.39.254]
	I0414 14:29:56.589532 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e ...
	I0414 14:29:56.589564 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e: {Name:mk9fb7b2adad4a62e9ebf1f50826b8647aaaa2d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:56.589727 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e ...
	I0414 14:29:56.589740 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e: {Name:mk7ad07038879568d4a23c2fb5c04f12405eb02f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:56.589811 1213155 certs.go:381] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt
	I0414 14:29:56.589948 1213155 certs.go:385] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key
	I0414 14:29:56.590096 1213155 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key
	I0414 14:29:56.590118 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0414 14:29:56.590137 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0414 14:29:56.590151 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0414 14:29:56.590162 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0414 14:29:56.590180 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0414 14:29:56.590198 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0414 14:29:56.590211 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0414 14:29:56.590220 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0414 14:29:56.590271 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem (1338 bytes)
	W0414 14:29:56.590298 1213155 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639_empty.pem, impossibly tiny 0 bytes
	I0414 14:29:56.590308 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 14:29:56.590327 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem (1082 bytes)
	I0414 14:29:56.590346 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:29:56.590368 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem (1675 bytes)
	I0414 14:29:56.590404 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:29:56.590430 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:56.590446 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem -> /usr/share/ca-certificates/1203639.pem
	I0414 14:29:56.590457 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /usr/share/ca-certificates/12036392.pem
	I0414 14:29:56.590494 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:56.593379 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:56.593755 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:56.593777 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:56.593996 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:56.594232 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:56.594405 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:56.594540 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:56.671687 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0414 14:29:56.677338 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0414 14:29:56.689003 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0414 14:29:56.693487 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0414 14:29:56.704430 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0414 14:29:56.708650 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0414 14:29:56.719039 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0414 14:29:56.723166 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0414 14:29:56.734152 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0414 14:29:56.738243 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0414 14:29:56.749081 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0414 14:29:56.753248 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0414 14:29:56.764073 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:29:56.788198 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:29:56.813073 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:29:56.835958 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:29:56.859645 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0414 14:29:56.882879 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 14:29:56.906187 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:29:56.928932 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 14:29:56.952365 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:29:56.974920 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem --> /usr/share/ca-certificates/1203639.pem (1338 bytes)
	I0414 14:29:56.998466 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /usr/share/ca-certificates/12036392.pem (1708 bytes)
	I0414 14:29:57.022704 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0414 14:29:57.038828 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0414 14:29:57.054237 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0414 14:29:57.069513 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0414 14:29:57.085532 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0414 14:29:57.101522 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0414 14:29:57.117372 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0414 14:29:57.132827 1213155 ssh_runner.go:195] Run: openssl version
	I0414 14:29:57.138331 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:29:57.148324 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:57.152469 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:57.152557 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:57.158279 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 14:29:57.169126 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1203639.pem && ln -fs /usr/share/ca-certificates/1203639.pem /etc/ssl/certs/1203639.pem"
	I0414 14:29:57.179995 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1203639.pem
	I0414 14:29:57.184265 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1203639.pem
	I0414 14:29:57.184340 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1203639.pem
	I0414 14:29:57.189810 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1203639.pem /etc/ssl/certs/51391683.0"
	I0414 14:29:57.199987 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12036392.pem && ln -fs /usr/share/ca-certificates/12036392.pem /etc/ssl/certs/12036392.pem"
	I0414 14:29:57.210177 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12036392.pem
	I0414 14:29:57.214740 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/12036392.pem
	I0414 14:29:57.214815 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12036392.pem
	I0414 14:29:57.221853 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12036392.pem /etc/ssl/certs/3ec20f2e.0"
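
The three test/hash/link passes above are the standard OpenSSL trust-store idiom: openssl x509 -hash -noout prints the certificate's subject-name hash (b5213941 for minikubeCA here), and a <hash>.0 symlink under /etc/ssl/certs is the name OpenSSL resolves at verification time. A minimal Go sketch of the same sequence, illustrative only rather than minikube's actual certs.go, with the paths taken from the log (needs write access to /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // path as seen in the log
	// Same command the log runs: print the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // "ln -fs": replace any stale link first
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", pem)
}
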
	I0414 14:29:57.232248 1213155 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:29:57.236270 1213155 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 14:29:57.236327 1213155 kubeadm.go:934] updating node {m02 192.168.39.111 8443 v1.32.2 containerd true true} ...
	I0414 14:29:57.236439 1213155 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-290859-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 14:29:57.236473 1213155 kube-vip.go:115] generating kube-vip config ...
	I0414 14:29:57.236525 1213155 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0414 14:29:57.252239 1213155 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0414 14:29:57.252336 1213155 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.10
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
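
The manifest above is a static pod: kube-vip runs on each control plane, takes leader election on the plndr-cp-lock lease, and answers ARP for the VIP 192.168.39.254 on eth0; the modprobe of the ip_vs modules just above is the gate for lb_enable, which spreads API-server traffic on port 8443 across the control planes. A rough sketch of how such a manifest can be produced by filling a Go template with the per-cluster values (assumed shape, trimmed to a few fields, not kube-vip.go verbatim):

package main

import (
	"os"
	"text/template"
)

// A trimmed-down template; the real manifest carries the full env list
// shown above (leader election, lb_enable, and so on).
const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args: ["manager"]
    env:
    - {name: port, value: "{{.Port}}"}
    - {name: vip_interface, value: {{.Interface}}}
    - {name: address, value: {{.VIP}}}
    image: ghcr.io/kube-vip/kube-vip:v0.8.10
    name: kube-vip
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifest))
	t.Execute(os.Stdout, struct{ VIP, Interface, Port string }{
		"192.168.39.254", "eth0", "8443",
	})
}
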
	I0414 14:29:57.252412 1213155 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 14:29:57.262218 1213155 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.2': No such file or directory
	
	Initiating transfer...
	I0414 14:29:57.262295 1213155 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.2
	I0414 14:29:57.271580 1213155 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
	I0414 14:29:57.271599 1213155 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubeadm
	I0414 14:29:57.271617 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubectl -> /var/lib/minikube/binaries/v1.32.2/kubectl
	I0414 14:29:57.271622 1213155 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubelet
	I0414 14:29:57.271681 1213155 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl
	I0414 14:29:57.275804 1213155 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubectl': No such file or directory
	I0414 14:29:57.275835 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubectl --> /var/lib/minikube/binaries/v1.32.2/kubectl (57323672 bytes)
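
Each transfer above follows the same probe-then-copy pattern: run stat on the guest, and only scp the payload when the probe exits non-zero ("No such file or directory"). A local, simplified Go sketch of that idiom; minikube does this over SSH and also inspects the stat output, and ensureFile is a hypothetical helper, not a minikube function:

package main

import (
	"fmt"
	"io"
	"os"
	"os/exec"
)

// ensureFile probes dst with stat and copies src over it only when the
// probe fails, mirroring the existence checks in the log.
func ensureFile(src, dst string) error {
	if exec.Command("stat", "-c", "%s %y", dst).Run() == nil {
		return nil // already present: skip the (multi-MB) transfer
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in) // stands in for the scp in the log
	return err
}

func main() {
	if err := ensureFile("kubectl", "/var/lib/minikube/binaries/v1.32.2/kubectl"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
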
	I0414 14:29:58.408400 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:29:58.423781 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubelet -> /var/lib/minikube/binaries/v1.32.2/kubelet
	I0414 14:29:58.423898 1213155 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet
	I0414 14:29:58.428378 1213155 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubelet': No such file or directory
	I0414 14:29:58.428415 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubelet --> /var/lib/minikube/binaries/v1.32.2/kubelet (77406468 bytes)
	I0414 14:29:58.749359 1213155 out.go:201] 
	W0414 14:29:58.750775 1213155 out.go:270] X Exiting due to GUEST_START: failed to start node: adding node: update node: downloading binaries: downloading kubeadm: download failed: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256 Dst:/home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubeadm.download Pwd: Mode:2 Umask:---------- Detectors:[0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0] Decompressors:map[bz2:0xc0004c8690 gz:0xc0004c8698 tar:0xc0004c8610 tar.bz2:0xc0004c8620 tar.gz:0xc0004c8630 tar.xz:0xc0004c8650 tar.zst:0xc0004c8660 tbz2:0xc0004c8620 tgz:0xc0004c8630 txz:0xc0004c8650 tzst:0xc0004c8660 xz:0xc0004c8700 zip:0xc0004c8720 zst:0xc0004c8708] Getters:map[file:0xc00216a250 http:0xc00012c550 https:0xc00012c5a0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: read tcp 10.154.0.3:60586->151.101.193.55:443: read: connection reset by peer
	W0414 14:29:58.750801 1213155 out.go:270] * 
	W0414 14:29:58.751639 1213155 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 14:29:58.753070 1213155 out.go:201] 
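
The GUEST_START failure decodes as follows: minikube hands the kubeadm URL to hashicorp/go-getter (the &{Ctx:... Src:... Dst:...} dump above is that client struct), and the ?checksum=file:...sha256 suffix is go-getter syntax telling it to fetch the .sha256 file and verify the binary against it. The TCP reset from 151.101.193.55 (the dl.k8s.io CDN) aborted the transfer before verification, so the second control-plane node never got its binaries. A minimal reproduction of that call under those assumptions; the real jenkins cache destination path is shortened here:

package main

import (
	"log"

	getter "github.com/hashicorp/go-getter"
)

func main() {
	// Same source string as the failed download, checksum suffix included.
	src := "https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm" +
		"?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256"
	dst := "/tmp/kubeadm.download" // stand-in for the .minikube cache path
	if err := getter.GetFile(dst, src); err != nil {
		log.Fatal(err) // e.g. "read: connection reset by peer", as in this run
	}
}
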
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	24e6d7cfe7ea4       8c811b4aec35f       12 minutes ago      Running             busybox                   0                   78438e8022143       busybox-58667487b6-t6bgg
	731a9f2fe8645       c69fa2e9cbf5f       13 minutes ago      Running             coredns                   0                   e56d2e4c87eea       coredns-668d6bf9bc-qnl6q
	0ec0a3a234c7c       c69fa2e9cbf5f       13 minutes ago      Running             coredns                   0                   2818c413e6e32       coredns-668d6bf9bc-wbn4p
	922f97d06563e       6e38f40d628db       13 minutes ago      Running             storage-provisioner       0                   4de376d34ee7f       storage-provisioner
	2df8ccb8d6ed9       df3849d954c98       13 minutes ago      Running             kindnet-cni               0                   08244cfc780bd       kindnet-hm99t
	e22a81661302f       f1332858868e1       13 minutes ago      Running             kube-proxy                0                   f20a0bcfbd507       kube-proxy-cg945
	9914f8879fc43       6ff023a402a69       13 minutes ago      Running             kube-vip                  0                   7b4e857fc4a72       kube-vip-ha-290859
	8263b35014337       b6a454c5a800d       13 minutes ago      Running             kube-controller-manager   0                   96ffccfabb2f0       kube-controller-manager-ha-290859
	3607093f95b04       85b7a174738ba       13 minutes ago      Running             kube-apiserver            0                   7d06c53c8318a       kube-apiserver-ha-290859
	b9d0c94204534       a9e7e6b294baf       13 minutes ago      Running             etcd                      0                   07c98c2ded11c       etcd-ha-290859
	341626ffff967       d8e673e7c9983       13 minutes ago      Running             kube-scheduler            0                   d86edf81d4f34       kube-scheduler-ha-290859
	
	
	==> containerd <==
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.168944603Z" level=info msg="StartContainer for \"0ec0a3a234c7c9ab89ca83a237362a229e9c5f0e94fdbf641b886cf994e1cd2f\""
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.181036869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qnl6q,Uid:a590080d-c4b1-4697-9849-ae6130e483a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"e56d2e4c87eea2d527e5c301e33c596e4ec4533b17e49248e3c35eeb66f90f11\""
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.186359489Z" level=info msg="CreateContainer within sandbox \"e56d2e4c87eea2d527e5c301e33c596e4ec4533b17e49248e3c35eeb66f90f11\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.209760426Z" level=info msg="CreateContainer within sandbox \"e56d2e4c87eea2d527e5c301e33c596e4ec4533b17e49248e3c35eeb66f90f11\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0\""
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.212826022Z" level=info msg="StartContainer for \"922f97d06563e10c12ce83edd45e4f1aa0b78449dcdb50b413a7f4fc80cc346b\" returns successfully"
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.215681811Z" level=info msg="StartContainer for \"731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0\""
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.285830032Z" level=info msg="StartContainer for \"0ec0a3a234c7c9ab89ca83a237362a229e9c5f0e94fdbf641b886cf994e1cd2f\" returns successfully"
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.294639585Z" level=info msg="StartContainer for \"731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0\" returns successfully"
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.131928214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-58667487b6-t6bgg,Uid:bd39f57c-bcb5-4d77-b171-6d4d2f237e54,Namespace:default,Attempt:0,}"
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.218617705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.218691310Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.218706805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.218958691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.281907696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-58667487b6-t6bgg,Uid:bd39f57c-bcb5-4d77-b171-6d4d2f237e54,Namespace:default,Attempt:0,} returns sandbox id \"78438e8022143055bed5e2d8a26db130ead88208a68bd14ca25618be3edf24e2\""
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.284050999Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\""
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.401970091Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.404464641Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28: active requests=0, bytes read=727667"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.406415797Z" level=info msg="ImageCreate event name:\"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.409920833Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.411266903Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28\" with image id \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\", repo tag \"gcr.io/k8s-minikube/busybox:1.28\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\", size \"725911\" in 2.127171694s"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.411378057Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\" returns image reference \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\""
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.414728181Z" level=info msg="CreateContainer within sandbox \"78438e8022143055bed5e2d8a26db130ead88208a68bd14ca25618be3edf24e2\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.437197602Z" level=info msg="CreateContainer within sandbox \"78438e8022143055bed5e2d8a26db130ead88208a68bd14ca25618be3edf24e2\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"24e6d7cfe7ea4490a4e08a40f32b9cf717c4d83060631102c580d6adf2fc47f5\""
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.439640223Z" level=info msg="StartContainer for \"24e6d7cfe7ea4490a4e08a40f32b9cf717c4d83060631102c580d6adf2fc47f5\""
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.489937462Z" level=info msg="StartContainer for \"24e6d7cfe7ea4490a4e08a40f32b9cf717c4d83060631102c580d6adf2fc47f5\" returns successfully"
	
	
	==> coredns [0ec0a3a234c7c9ab89ca83a237362a229e9c5f0e94fdbf641b886cf994e1cd2f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:46089 - 56153 "HINFO IN 6072608555509463616.6529762715821029691. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009374887s
	[INFO] 10.244.0.4:35907 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000221161s
	[INFO] 10.244.0.4:36782 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005796917s
	[INFO] 10.244.0.4:41522 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000189547s
	[INFO] 10.244.0.4:42146 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118814s
	[INFO] 10.244.0.4:60607 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123758s
	[INFO] 10.244.0.4:43711 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000363945s
	[INFO] 10.244.0.4:55165 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147511s
	[INFO] 10.244.0.4:37988 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063814s
	[INFO] 10.244.0.4:34715 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110518s
	
	
	==> coredns [731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:50026 - 40228 "HINFO IN 6089878548460793106.7503956428927620962. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010088983s
	[INFO] 10.244.0.4:56129 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00054069s
	[INFO] 10.244.0.4:53926 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.015577927s
	[INFO] 10.244.0.4:39454 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 1.017801671s
	[INFO] 10.244.0.4:52928 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.006480432s
	[INFO] 10.244.0.4:37155 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000144828s
	[INFO] 10.244.0.4:60063 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003567762s
	[INFO] 10.244.0.4:60207 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153406s
	[INFO] 10.244.0.4:60174 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117303s
	[INFO] 10.244.0.4:60031 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124845s
	[INFO] 10.244.0.4:43114 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000177401s
	[INFO] 10.244.0.4:59108 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000291115s
	
	
	==> describe nodes <==
	Name:               ha-290859
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-290859
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ed8f1f01b35eff2786f40199152a1775806f2de2
	                    minikube.k8s.io/name=ha-290859
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_14T14_29_26_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Apr 2025 14:29:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-290859
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Apr 2025 14:42:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Apr 2025 14:42:20 +0000   Mon, 14 Apr 2025 14:29:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Apr 2025 14:42:20 +0000   Mon, 14 Apr 2025 14:29:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Apr 2025 14:42:20 +0000   Mon, 14 Apr 2025 14:29:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Apr 2025 14:42:20 +0000   Mon, 14 Apr 2025 14:29:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.110
	  Hostname:    ha-290859
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0538f5775f954b3bbf6bc94e8eb6c49a
	  System UUID:                0538f577-5f95-4b3b-bf6b-c94e8eb6c49a
	  Boot ID:                    357ae105-a7f9-47b1-bf31-1c1aadedfe92
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-t6bgg             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-668d6bf9bc-qnl6q             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-668d6bf9bc-wbn4p             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-290859                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-hm99t                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-290859             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-290859    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-cg945                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-290859             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-290859                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node ha-290859 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node ha-290859 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node ha-290859 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m   node-controller  Node ha-290859 event: Registered Node ha-290859 in Controller
	  Normal  NodeReady                13m   kubelet          Node ha-290859 status is now: NodeReady
	
	
	Name:               ha-290859-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-290859-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ed8f1f01b35eff2786f40199152a1775806f2de2
	                    minikube.k8s.io/name=ha-290859
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_04_14T14_42_30_0700
	                    minikube.k8s.io/version=v1.35.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Apr 2025 14:42:29 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-290859-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Apr 2025 14:42:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Apr 2025 14:42:49 +0000   Mon, 14 Apr 2025 14:42:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Apr 2025 14:42:49 +0000   Mon, 14 Apr 2025 14:42:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Apr 2025 14:42:49 +0000   Mon, 14 Apr 2025 14:42:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Apr 2025 14:42:49 +0000   Mon, 14 Apr 2025 14:42:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.112
	  Hostname:    ha-290859-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 96e9da9bd9e1490583702338b88b0c23
	  System UUID:                96e9da9b-d9e1-4905-8370-2338b88b0c23
	  Boot ID:                    b2600615-03c7-4984-8138-73f9baedc04e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-8bg2x    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kindnet-4jz25               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23s
	  kube-system                 kube-proxy-sp56w            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 17s                kube-proxy       
	  Normal  NodeHasSufficientMemory  23s (x2 over 23s)  kubelet          Node ha-290859-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x2 over 23s)  kubelet          Node ha-290859-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x2 over 23s)  kubelet          Node ha-290859-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19s                node-controller  Node ha-290859-m03 event: Registered Node ha-290859-m03 in Controller
	  Normal  NodeReady                3s                 kubelet          Node ha-290859-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051284] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038065] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.815736] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.968563] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.543371] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr14 14:29] systemd-fstab-generator[505]: Ignoring "noauto" option for root device
	[  +0.058894] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059786] systemd-fstab-generator[518]: Ignoring "noauto" option for root device
	[  +0.183634] systemd-fstab-generator[532]: Ignoring "noauto" option for root device
	[  +0.109211] systemd-fstab-generator[544]: Ignoring "noauto" option for root device
	[  +0.261328] systemd-fstab-generator[574]: Ignoring "noauto" option for root device
	[  +4.868852] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.061817] kauditd_printk_skb: 158 callbacks suppressed
	[  +0.541337] systemd-fstab-generator[688]: Ignoring "noauto" option for root device
	[  +4.433977] systemd-fstab-generator[826]: Ignoring "noauto" option for root device
	[  +0.054755] kauditd_printk_skb: 46 callbacks suppressed
	[  +7.040196] systemd-fstab-generator[1293]: Ignoring "noauto" option for root device
	[  +0.092655] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.133260] kauditd_printk_skb: 36 callbacks suppressed
	[ +14.332004] kauditd_printk_skb: 23 callbacks suppressed
	[Apr14 14:30] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [b9d0c942045346e617420beacf1ee53ebaa73b72295bfad233845fe524f8b15c] <==
	{"level":"info","ts":"2025-04-14T14:29:20.939433Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T14:29:20.940639Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a3dbfa6decfc8853","local-member-id":"fbb007bab925a598","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:29:20.940850Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:29:20.940910Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:29:20.941291Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T14:29:20.941327Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T14:29:20.942134Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-14T14:29:20.942264Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.110:2379"}
	{"level":"info","ts":"2025-04-14T14:29:20.943625Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-14T14:29:20.943655Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2025-04-14T14:29:27.104552Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.197172ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/node-controller\" limit:1 ","response":"range_response_count:1 size:195"}
	{"level":"info","ts":"2025-04-14T14:29:27.104712Z","caller":"traceutil/trace.go:171","msg":"trace[2014118741] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/node-controller; range_end:; response_count:1; response_revision:283; }","duration":"161.489617ms","start":"2025-04-14T14:29:26.943197Z","end":"2025-04-14T14:29:27.104687Z","steps":["trace[2014118741] 'range keys from in-memory index tree'  (duration: 161.141805ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:29:27.105569Z","caller":"traceutil/trace.go:171","msg":"trace[1003808847] transaction","detail":"{read_only:false; response_revision:284; number_of_response:1; }","duration":"157.128151ms","start":"2025-04-14T14:29:26.948431Z","end":"2025-04-14T14:29:27.105559Z","steps":["trace[1003808847] 'process raft request'  (duration: 84.378612ms)","trace[1003808847] 'compare'  (duration: 71.52798ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-14T14:29:27.104865Z","caller":"traceutil/trace.go:171","msg":"trace[43329066] linearizableReadLoop","detail":"{readStateIndex:297; appliedIndex:296; }","duration":"119.436827ms","start":"2025-04-14T14:29:26.985404Z","end":"2025-04-14T14:29:27.104841Z","steps":["trace[43329066] 'read index received'  (duration: 47.335931ms)","trace[43329066] 'applied index is now lower than readState.Index'  (duration: 72.100547ms)"],"step_count":2}
	{"level":"warn","ts":"2025-04-14T14:29:27.105882Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.482108ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-290859\" limit:1 ","response":"range_response_count:1 size:4024"}
	{"level":"info","ts":"2025-04-14T14:29:27.105907Z","caller":"traceutil/trace.go:171","msg":"trace[1848025885] range","detail":"{range_begin:/registry/minions/ha-290859; range_end:; response_count:1; response_revision:284; }","duration":"120.538719ms","start":"2025-04-14T14:29:26.985360Z","end":"2025-04-14T14:29:27.105899Z","steps":["trace[1848025885] 'agreement among raft nodes before linearized reading'  (duration: 120.384333ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:30:04.979205Z","caller":"traceutil/trace.go:171","msg":"trace[85484590] transaction","detail":"{read_only:false; response_revision:496; number_of_response:1; }","duration":"156.247744ms","start":"2025-04-14T14:30:04.822935Z","end":"2025-04-14T14:30:04.979183Z","steps":["trace[85484590] 'process raft request'  (duration: 156.102613ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:39:20.967676Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":955}
	{"level":"info","ts":"2025-04-14T14:39:20.980951Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":955,"took":"12.971168ms","hash":3281203929,"current-db-size-bytes":2400256,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2400256,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2025-04-14T14:39:20.980998Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3281203929,"revision":955,"compact-revision":-1}
	{"level":"info","ts":"2025-04-14T14:42:12.425594Z","caller":"traceutil/trace.go:171","msg":"trace[593749251] linearizableReadLoop","detail":"{readStateIndex:1974; appliedIndex:1973; }","duration":"103.549581ms","start":"2025-04-14T14:42:12.322004Z","end":"2025-04-14T14:42:12.425554Z","steps":["trace[593749251] 'read index received'  (duration: 102.720139ms)","trace[593749251] 'applied index is now lower than readState.Index'  (duration: 828.805µs)"],"step_count":2}
	{"level":"warn","ts":"2025-04-14T14:42:12.426144Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.759593ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2025-04-14T14:42:12.426196Z","caller":"traceutil/trace.go:171","msg":"trace[257637869] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; response_count:0; response_revision:1805; }","duration":"104.23976ms","start":"2025-04-14T14:42:12.321948Z","end":"2025-04-14T14:42:12.426188Z","steps":["trace[257637869] 'agreement among raft nodes before linearized reading'  (duration: 103.769974ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:42:12.425685Z","caller":"traceutil/trace.go:171","msg":"trace[874985590] transaction","detail":"{read_only:false; response_revision:1805; number_of_response:1; }","duration":"128.996586ms","start":"2025-04-14T14:42:12.296675Z","end":"2025-04-14T14:42:12.425672Z","steps":["trace[874985590] 'process raft request'  (duration: 128.079961ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T14:42:29.811595Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.362023ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11932452365827166964 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:25989634b465d2f3>","response":"size:42"}
	
	
	==> kernel <==
	 14:42:52 up 14 min,  0 users,  load average: 0.17, 0.20, 0.11
	Linux ha-290859 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2df8ccb8d6ed928a95e69ecd1be2105fc737c699aa26805820a0af0eca5bb50d] <==
	I0414 14:41:14.504432       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:41:14.504715       1 main.go:301] handling current node
	I0414 14:41:24.505571       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:41:24.505635       1 main.go:301] handling current node
	I0414 14:41:34.500339       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:41:34.500416       1 main.go:301] handling current node
	I0414 14:41:44.500407       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:41:44.500557       1 main.go:301] handling current node
	I0414 14:41:54.509039       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:41:54.509064       1 main.go:301] handling current node
	I0414 14:42:04.509599       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:42:04.509640       1 main.go:301] handling current node
	I0414 14:42:14.505184       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:42:14.505543       1 main.go:301] handling current node
	I0414 14:42:24.502960       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:42:24.503004       1 main.go:301] handling current node
	I0414 14:42:34.500754       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:42:34.501033       1 main.go:301] handling current node
	I0414 14:42:34.501166       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:42:34.501231       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:42:34.501702       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.112 Flags: [] Table: 0 Realm: 0} 
	I0414 14:42:44.500437       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:42:44.500523       1 main.go:301] handling current node
	I0414 14:42:44.500540       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:42:44.500545       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
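
The kindnet loop above reconciles one route per remote node: traffic for ha-290859-m03's pod CIDR 10.244.1.0/24 is sent via that node's IP 192.168.39.112. The printed struct looks like a vishvananda/netlink route, so, assuming that package, the operation reduces to roughly this Linux-only sketch (requires CAP_NET_ADMIN):

package main

import (
	"log"
	"net"

	"github.com/vishvananda/netlink"
)

func main() {
	// ha-290859-m03's PodCIDR and node IP, from the log above.
	_, dst, err := net.ParseCIDR("10.244.1.0/24")
	if err != nil {
		log.Fatal(err)
	}
	route := netlink.Route{Dst: dst, Gw: net.ParseIP("192.168.39.112")}
	// RouteReplace is idempotent, so a periodic reconcile loop like the
	// one in the log can re-apply it safely on every pass.
	if err := netlink.RouteReplace(&route); err != nil {
		log.Fatal(err)
	}
}
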
	
	
	==> kube-apiserver [3607093f95b0430c4841d7be9ed19d0163ff2e9ee2889a44f89bd1ca07bf42d3] <==
	I0414 14:29:22.362271       1 autoregister_controller.go:144] Starting autoregister controller
	I0414 14:29:22.362276       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0414 14:29:22.362280       1 cache.go:39] Caches are synced for autoregister controller
	I0414 14:29:22.378719       1 controller.go:615] quota admission added evaluator for: namespaces
	I0414 14:29:22.457815       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0414 14:29:23.164003       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0414 14:29:23.168635       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0414 14:29:23.168816       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0414 14:29:23.763560       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0414 14:29:23.812117       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0414 14:29:23.884276       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0414 14:29:23.896601       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.110]
	I0414 14:29:23.897534       1 controller.go:615] quota admission added evaluator for: endpoints
	I0414 14:29:23.902387       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0414 14:29:24.193931       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0414 14:29:25.780107       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0414 14:29:25.808820       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0414 14:29:25.816856       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0414 14:29:29.653221       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0414 14:29:29.756960       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0414 14:41:55.019097       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52466: use of closed network connection
	E0414 14:41:55.440782       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52532: use of closed network connection
	E0414 14:41:55.859929       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52600: use of closed network connection
	E0414 14:41:58.277207       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52686: use of closed network connection
	E0414 14:41:58.438151       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52698: use of closed network connection
	
	
	==> kube-controller-manager [8263b35014337f6119ba3a0d6487090fd5b1b3b8a002a99623620e847d186847] <==
	I0414 14:30:03.843786       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="5.465875ms"
	I0414 14:30:03.844627       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="57.422µs"
	I0414 14:30:26.371478       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
	I0414 14:37:12.908997       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
	I0414 14:42:20.033463       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
	I0414 14:42:29.935163       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-290859-m03\" does not exist"
	I0414 14:42:29.948852       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-290859-m03" podCIDRs=["10.244.1.0/24"]
	I0414 14:42:29.949152       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:29.949831       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:29.958386       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="234.248µs"
	I0414 14:42:29.963750       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:29.969981       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="39.002µs"
	I0414 14:42:30.275380       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:30.614411       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:33.964410       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-290859-m03"
	I0414 14:42:34.046665       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:39.961881       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:49.191468       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-290859-m03"
	I0414 14:42:49.192361       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:49.201252       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:49.216690       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="71.679µs"
	I0414 14:42:49.217122       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="45.948µs"
	I0414 14:42:49.230018       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="69.053µs"
	I0414 14:42:52.664944       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="13.387962ms"
	I0414 14:42:52.665652       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="82.546µs"
	
	
	==> kube-proxy [e22a81661302ff340c9846a7a06a13d955ab98cfe8e7088e0c805fb4f3eee8a2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0414 14:29:30.555771       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0414 14:29:30.580550       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.110"]
	E0414 14:29:30.580640       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0414 14:29:30.617235       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0414 14:29:30.617293       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0414 14:29:30.617328       1 server_linux.go:170] "Using iptables Proxier"
	I0414 14:29:30.620046       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0414 14:29:30.620989       1 server.go:497] "Version info" version="v1.32.2"
	I0414 14:29:30.621018       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 14:29:30.625365       1 config.go:329] "Starting node config controller"
	I0414 14:29:30.625863       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0414 14:29:30.628597       1 config.go:199] "Starting service config controller"
	I0414 14:29:30.628644       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0414 14:29:30.628665       1 config.go:105] "Starting endpoint slice config controller"
	I0414 14:29:30.628683       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0414 14:29:30.726314       1 shared_informer.go:320] Caches are synced for node config
	I0414 14:29:30.729639       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0414 14:29:30.729680       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [341626ffff967b14e3bfaa050905eba2b82a07223c0356ee50b5deeef6d9898b] <==
	E0414 14:29:22.288686       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:22.287191       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:22.288704       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:22.286394       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0414 14:29:22.288719       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0414 14:29:22.285771       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.108289       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0414 14:29:23.108351       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.153824       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:23.153954       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.203744       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0414 14:29:23.203977       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0414 14:29:23.367236       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0414 14:29:23.367550       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.396026       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0414 14:29:23.396243       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.401643       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:23.401820       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.425454       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0414 14:29:23.425684       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.433181       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:23.433222       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.457688       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0414 14:29:23.457949       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0414 14:29:25.662221       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 14 14:38:25 ha-290859 kubelet[1300]: E0414 14:38:25.691874    1300 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:38:25 ha-290859 kubelet[1300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:38:25 ha-290859 kubelet[1300]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:38:25 ha-290859 kubelet[1300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:38:25 ha-290859 kubelet[1300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 14:39:25 ha-290859 kubelet[1300]: E0414 14:39:25.692811    1300 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:39:25 ha-290859 kubelet[1300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:39:25 ha-290859 kubelet[1300]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:39:25 ha-290859 kubelet[1300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:39:25 ha-290859 kubelet[1300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 14:40:25 ha-290859 kubelet[1300]: E0414 14:40:25.693003    1300 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:40:25 ha-290859 kubelet[1300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:40:25 ha-290859 kubelet[1300]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:40:25 ha-290859 kubelet[1300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:40:25 ha-290859 kubelet[1300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 14:41:25 ha-290859 kubelet[1300]: E0414 14:41:25.692589    1300 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:41:25 ha-290859 kubelet[1300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:41:25 ha-290859 kubelet[1300]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:41:25 ha-290859 kubelet[1300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:41:25 ha-290859 kubelet[1300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 14:42:25 ha-290859 kubelet[1300]: E0414 14:42:25.692394    1300 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:42:25 ha-290859 kubelet[1300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:42:25 ha-290859 kubelet[1300]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:42:25 ha-290859 kubelet[1300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:42:25 ha-290859 kubelet[1300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
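
The kubelet entries at the tail of the dump above repeat the same failure once a minute: creating the KUBE-KUBELET-CANARY chain in the ip6tables "nat" table fails because the guest kernel lacks ip6tables NAT support ("Table does not exist (do you need to insmod?)"). A minimal standalone probe for that condition is sketched below; it assumes only that ip6tables is on PATH. The command and chain name are taken from the log, while the program itself is illustrative rather than part of the test suite.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The kubelet canary effectively runs: ip6tables -t nat -N KUBE-KUBELET-CANARY
	out, err := exec.Command("ip6tables", "-t", "nat", "-N", "KUBE-KUBELET-CANARY").CombinedOutput()
	if err != nil {
		// On this guest this reproduces the "can't initialize ip6tables
		// table `nat'" message seen in the kubelet log above.
		fmt.Printf("ip6tables nat table unavailable: %v\n%s", err, out)
		return
	}
	// Creation succeeded; remove the probe chain again.
	_ = exec.Command("ip6tables", "-t", "nat", "-X", "KUBE-KUBELET-CANARY").Run()
	fmt.Println("ip6tables nat table available")
}
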
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-290859 -n ha-290859
helpers_test.go:261: (dbg) Run:  kubectl --context ha-290859 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-58667487b6-q9jvx
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-290859 describe pod busybox-58667487b6-q9jvx
helpers_test.go:282: (dbg) kubectl --context ha-290859 describe pod busybox-58667487b6-q9jvx:
-- stdout --
	Name:             busybox-58667487b6-q9jvx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=58667487b6
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-58667487b6
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fklg7 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-fklg7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  2m27s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  15s (x2 over 24s)    default-scheduler  0/2 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  4s                   default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (53.18s)
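
The FailedScheduling events above fit one pattern: every schedulable node already runs a busybox replica, so the pending pod violates a required pod anti-affinity until another node becomes ready. The busybox Deployment spec is not captured in this report; the client-go sketch below is an assumption about what such a rule looks like, one app=busybox pod per hostname, which would produce exactly these "didn't match pod anti-affinity rules" messages.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// busyboxAntiAffinity builds a required anti-affinity term that forbids two
// app=busybox pods from sharing a node (topology domain keyed by hostname).
// Illustrative only; the actual Deployment manifest is not in this report.
func busyboxAntiAffinity() *corev1.Affinity {
	return &corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"app": "busybox"},
				},
				TopologyKey: "kubernetes.io/hostname",
			}},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", busyboxAntiAffinity())
}

With a required rule like this, a three-replica ReplicaSet needs three ready, untainted nodes; the events above show the pod staying Pending while ha-290859-m03 still carries the node.kubernetes.io/not-ready taint.
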
TestMultiControlPlane/serial/HAppyAfterClusterStart (2.47s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:305: expected profile "ha-290859" in json of 'profile list' to include 4 nodes but have 3 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-290859\",\"Status\":\"OK\",\"Config\":{\"Name\":\"ha-290859\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"Disab
leDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.32.2\",\"ClusterName\":\"ha-290859\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"containerd\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.110\",\"Port\":8443,\"KubernetesVersion\":\"v1.32.2\",\"ContainerRuntim
e\":\"containerd\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.111\",\"Port\":8443,\"KubernetesVersion\":\"v1.32.2\",\"ContainerRuntime\":\"containerd\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.112\",\"Port\":0,\"KubernetesVersion\":\"v1.32.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod
-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\"
:false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
ha_test.go:309: expected profile "ha-290859" in json of 'profile list' to have "HAppy" status but have "OK" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-290859\",\"Status\":\"OK\",\"Config\":{\"Name\":\"ha-290859\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null
,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.32.2\",\"ClusterName\":\"ha-290859\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"containerd\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.110\",\"Port\":8443,\"KubernetesVersion\":\"v1.32.2\",\"Contain
erRuntime\":\"containerd\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.111\",\"Port\":8443,\"KubernetesVersion\":\"v1.32.2\",\"ContainerRuntime\":\"containerd\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.112\",\"Port\":0,\"KubernetesVersion\":\"v1.32.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":fal
se,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableM
etrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
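
Both assertions above trip on the same captured JSON: the profile's Config.Nodes array holds three entries (the just-added m03 still reports Port 0 and an empty ContainerRuntime) and Status is "OK" rather than "HAppy". A minimal sketch of the check is below; it decodes only the fields quoted in the failure text, with the struct layout inferred from that JSON rather than taken from the minikube source.

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// profileList mirrors just enough of `minikube profile list --output json`
// (as quoted in the failure above) to count nodes per profile.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
		Config struct {
			Nodes []struct {
				Name         string `json:"Name"`
				IP           string `json:"IP"`
				ControlPlane bool   `json:"ControlPlane"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "--output", "json").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, p := range pl.Valid {
		// The test expects 4 nodes and Status "HAppy"; the captured
		// output has 3 nodes and Status "OK".
		fmt.Printf("%s: status=%s nodes=%d\n", p.Name, p.Status, len(p.Config.Nodes))
	}
}
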
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-290859 -n ha-290859
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterClusterStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-290859 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-290859 logs -n 25: (1.109486802s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterClusterStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x -- nslookup |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx -- nslookup |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg -- nslookup |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x             |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx             |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg             |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg -- sh       |           |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1            |           |         |         |                     |                     |
	| node    | add -p ha-290859 -v=7                | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:42 UTC | 14 Apr 25 14:42 UTC |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 14:28:44
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 14:28:44.853283 1213155 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:28:44.853383 1213155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:28:44.853391 1213155 out.go:358] Setting ErrFile to fd 2...
	I0414 14:28:44.853395 1213155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:28:44.853589 1213155 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
	I0414 14:28:44.854173 1213155 out.go:352] Setting JSON to false
	I0414 14:28:44.855127 1213155 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":22268,"bootTime":1744618657,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 14:28:44.855241 1213155 start.go:139] virtualization: kvm guest
	I0414 14:28:44.857434 1213155 out.go:177] * [ha-290859] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 14:28:44.858763 1213155 out.go:177]   - MINIKUBE_LOCATION=20512
	I0414 14:28:44.858802 1213155 notify.go:220] Checking for updates...
	I0414 14:28:44.861113 1213155 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 14:28:44.862568 1213155 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:28:44.864291 1213155 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:28:44.865558 1213155 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 14:28:44.866690 1213155 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 14:28:44.867994 1213155 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 14:28:44.903880 1213155 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 14:28:44.904972 1213155 start.go:297] selected driver: kvm2
	I0414 14:28:44.904990 1213155 start.go:901] validating driver "kvm2" against <nil>
	I0414 14:28:44.905002 1213155 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 14:28:44.905693 1213155 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:28:44.905760 1213155 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20512-1196368/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 14:28:44.921165 1213155 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 14:28:44.921211 1213155 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 14:28:44.921449 1213155 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 14:28:44.921483 1213155 cni.go:84] Creating CNI manager for ""
	I0414 14:28:44.921521 1213155 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0414 14:28:44.921528 1213155 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0414 14:28:44.921581 1213155 start.go:340] cluster config:
	{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:container
d CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:28:44.921681 1213155 iso.go:125] acquiring lock: {Name:mkbf783c803effe6c4b8297ac6b84dcca9e29413 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:28:44.923479 1213155 out.go:177] * Starting "ha-290859" primary control-plane node in "ha-290859" cluster
	I0414 14:28:44.924489 1213155 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:28:44.924534 1213155 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
	I0414 14:28:44.924545 1213155 cache.go:56] Caching tarball of preloaded images
	I0414 14:28:44.924630 1213155 preload.go:172] Found /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0414 14:28:44.924642 1213155 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0414 14:28:44.925004 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:28:44.925036 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json: {Name:mk9cf46898e9311ef305249e5d7a46d116958366 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:28:44.925215 1213155 start.go:360] acquireMachinesLock for ha-290859: {Name:mk496006d22a0565bb9e0d565e1b3cb0cf0971cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 14:28:44.925249 1213155 start.go:364] duration metric: took 19.936µs to acquireMachinesLock for "ha-290859"
	I0414 14:28:44.925270 1213155 start.go:93] Provisioning new machine with config: &{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:h
a-290859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0414 14:28:44.925333 1213155 start.go:125] createHost starting for "" (driver="kvm2")
	I0414 14:28:44.926873 1213155 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0414 14:28:44.927025 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:28:44.927081 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:28:44.941913 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35769
	I0414 14:28:44.942352 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:28:44.942833 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:28:44.942851 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:28:44.943193 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:28:44.943375 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:28:44.943526 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:28:44.943664 1213155 start.go:159] libmachine.API.Create for "ha-290859" (driver="kvm2")
	I0414 14:28:44.943687 1213155 client.go:168] LocalClient.Create starting
	I0414 14:28:44.943713 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem
	I0414 14:28:44.943749 1213155 main.go:141] libmachine: Decoding PEM data...
	I0414 14:28:44.943766 1213155 main.go:141] libmachine: Parsing certificate...
	I0414 14:28:44.943825 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem
	I0414 14:28:44.943844 1213155 main.go:141] libmachine: Decoding PEM data...
	I0414 14:28:44.943857 1213155 main.go:141] libmachine: Parsing certificate...
	I0414 14:28:44.943880 1213155 main.go:141] libmachine: Running pre-create checks...
	I0414 14:28:44.943888 1213155 main.go:141] libmachine: (ha-290859) Calling .PreCreateCheck
	I0414 14:28:44.944202 1213155 main.go:141] libmachine: (ha-290859) Calling .GetConfigRaw
	I0414 14:28:44.944583 1213155 main.go:141] libmachine: Creating machine...
	I0414 14:28:44.944596 1213155 main.go:141] libmachine: (ha-290859) Calling .Create
	I0414 14:28:44.944741 1213155 main.go:141] libmachine: (ha-290859) creating KVM machine...
	I0414 14:28:44.944764 1213155 main.go:141] libmachine: (ha-290859) creating network...
	I0414 14:28:44.945897 1213155 main.go:141] libmachine: (ha-290859) DBG | found existing default KVM network
	I0414 14:28:44.946500 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:44.946375 1213178 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001236b0}
	I0414 14:28:44.946525 1213155 main.go:141] libmachine: (ha-290859) DBG | created network xml: 
	I0414 14:28:44.946536 1213155 main.go:141] libmachine: (ha-290859) DBG | <network>
	I0414 14:28:44.946547 1213155 main.go:141] libmachine: (ha-290859) DBG |   <name>mk-ha-290859</name>
	I0414 14:28:44.946556 1213155 main.go:141] libmachine: (ha-290859) DBG |   <dns enable='no'/>
	I0414 14:28:44.946567 1213155 main.go:141] libmachine: (ha-290859) DBG |   
	I0414 14:28:44.946578 1213155 main.go:141] libmachine: (ha-290859) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0414 14:28:44.946589 1213155 main.go:141] libmachine: (ha-290859) DBG |     <dhcp>
	I0414 14:28:44.946597 1213155 main.go:141] libmachine: (ha-290859) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0414 14:28:44.946611 1213155 main.go:141] libmachine: (ha-290859) DBG |     </dhcp>
	I0414 14:28:44.946635 1213155 main.go:141] libmachine: (ha-290859) DBG |   </ip>
	I0414 14:28:44.946659 1213155 main.go:141] libmachine: (ha-290859) DBG |   
	I0414 14:28:44.946681 1213155 main.go:141] libmachine: (ha-290859) DBG | </network>
	I0414 14:28:44.946692 1213155 main.go:141] libmachine: (ha-290859) DBG | 
	I0414 14:28:44.951588 1213155 main.go:141] libmachine: (ha-290859) DBG | trying to create private KVM network mk-ha-290859 192.168.39.0/24...
	I0414 14:28:45.019463 1213155 main.go:141] libmachine: (ha-290859) DBG | private KVM network mk-ha-290859 192.168.39.0/24 created
	I0414 14:28:45.019524 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.019424 1213178 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:28:45.019537 1213155 main.go:141] libmachine: (ha-290859) setting up store path in /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859 ...
	I0414 14:28:45.019577 1213155 main.go:141] libmachine: (ha-290859) building disk image from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 14:28:45.019612 1213155 main.go:141] libmachine: (ha-290859) Downloading /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 14:28:45.329551 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.329430 1213178 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa...
	I0414 14:28:45.651739 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.651571 1213178 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/ha-290859.rawdisk...
	I0414 14:28:45.651774 1213155 main.go:141] libmachine: (ha-290859) DBG | Writing magic tar header
	I0414 14:28:45.651813 1213155 main.go:141] libmachine: (ha-290859) DBG | Writing SSH key tar header
	I0414 14:28:45.651828 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.651709 1213178 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859 ...
	I0414 14:28:45.651838 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859
	I0414 14:28:45.651849 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines
	I0414 14:28:45.651870 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:28:45.651877 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368
	I0414 14:28:45.651888 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859 (perms=drwx------)
	I0414 14:28:45.651901 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines (perms=drwxr-xr-x)
	I0414 14:28:45.651912 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube (perms=drwxr-xr-x)
	I0414 14:28:45.651969 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 14:28:45.651997 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins
	I0414 14:28:45.652007 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368 (perms=drwxrwxr-x)
	I0414 14:28:45.652022 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 14:28:45.652031 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 14:28:45.652040 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home
	I0414 14:28:45.652050 1213155 main.go:141] libmachine: (ha-290859) DBG | skipping /home - not owner
	I0414 14:28:45.652117 1213155 main.go:141] libmachine: (ha-290859) creating domain...
	I0414 14:28:45.653155 1213155 main.go:141] libmachine: (ha-290859) define libvirt domain using xml: 
	I0414 14:28:45.653173 1213155 main.go:141] libmachine: (ha-290859) <domain type='kvm'>
	I0414 14:28:45.653182 1213155 main.go:141] libmachine: (ha-290859)   <name>ha-290859</name>
	I0414 14:28:45.653197 1213155 main.go:141] libmachine: (ha-290859)   <memory unit='MiB'>2200</memory>
	I0414 14:28:45.653206 1213155 main.go:141] libmachine: (ha-290859)   <vcpu>2</vcpu>
	I0414 14:28:45.653212 1213155 main.go:141] libmachine: (ha-290859)   <features>
	I0414 14:28:45.653231 1213155 main.go:141] libmachine: (ha-290859)     <acpi/>
	I0414 14:28:45.653240 1213155 main.go:141] libmachine: (ha-290859)     <apic/>
	I0414 14:28:45.653258 1213155 main.go:141] libmachine: (ha-290859)     <pae/>
	I0414 14:28:45.653267 1213155 main.go:141] libmachine: (ha-290859)     
	I0414 14:28:45.653272 1213155 main.go:141] libmachine: (ha-290859)   </features>
	I0414 14:28:45.653277 1213155 main.go:141] libmachine: (ha-290859)   <cpu mode='host-passthrough'>
	I0414 14:28:45.653281 1213155 main.go:141] libmachine: (ha-290859)   
	I0414 14:28:45.653287 1213155 main.go:141] libmachine: (ha-290859)   </cpu>
	I0414 14:28:45.653317 1213155 main.go:141] libmachine: (ha-290859)   <os>
	I0414 14:28:45.653340 1213155 main.go:141] libmachine: (ha-290859)     <type>hvm</type>
	I0414 14:28:45.653351 1213155 main.go:141] libmachine: (ha-290859)     <boot dev='cdrom'/>
	I0414 14:28:45.653362 1213155 main.go:141] libmachine: (ha-290859)     <boot dev='hd'/>
	I0414 14:28:45.653372 1213155 main.go:141] libmachine: (ha-290859)     <bootmenu enable='no'/>
	I0414 14:28:45.653379 1213155 main.go:141] libmachine: (ha-290859)   </os>
	I0414 14:28:45.653387 1213155 main.go:141] libmachine: (ha-290859)   <devices>
	I0414 14:28:45.653396 1213155 main.go:141] libmachine: (ha-290859)     <disk type='file' device='cdrom'>
	I0414 14:28:45.653409 1213155 main.go:141] libmachine: (ha-290859)       <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/boot2docker.iso'/>
	I0414 14:28:45.653425 1213155 main.go:141] libmachine: (ha-290859)       <target dev='hdc' bus='scsi'/>
	I0414 14:28:45.653434 1213155 main.go:141] libmachine: (ha-290859)       <readonly/>
	I0414 14:28:45.653441 1213155 main.go:141] libmachine: (ha-290859)     </disk>
	I0414 14:28:45.653450 1213155 main.go:141] libmachine: (ha-290859)     <disk type='file' device='disk'>
	I0414 14:28:45.653459 1213155 main.go:141] libmachine: (ha-290859)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 14:28:45.653472 1213155 main.go:141] libmachine: (ha-290859)       <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/ha-290859.rawdisk'/>
	I0414 14:28:45.653484 1213155 main.go:141] libmachine: (ha-290859)       <target dev='hda' bus='virtio'/>
	I0414 14:28:45.653515 1213155 main.go:141] libmachine: (ha-290859)     </disk>
	I0414 14:28:45.653535 1213155 main.go:141] libmachine: (ha-290859)     <interface type='network'>
	I0414 14:28:45.653542 1213155 main.go:141] libmachine: (ha-290859)       <source network='mk-ha-290859'/>
	I0414 14:28:45.653551 1213155 main.go:141] libmachine: (ha-290859)       <model type='virtio'/>
	I0414 14:28:45.653571 1213155 main.go:141] libmachine: (ha-290859)     </interface>
	I0414 14:28:45.653583 1213155 main.go:141] libmachine: (ha-290859)     <interface type='network'>
	I0414 14:28:45.653600 1213155 main.go:141] libmachine: (ha-290859)       <source network='default'/>
	I0414 14:28:45.653612 1213155 main.go:141] libmachine: (ha-290859)       <model type='virtio'/>
	I0414 14:28:45.653620 1213155 main.go:141] libmachine: (ha-290859)     </interface>
	I0414 14:28:45.653629 1213155 main.go:141] libmachine: (ha-290859)     <serial type='pty'>
	I0414 14:28:45.653637 1213155 main.go:141] libmachine: (ha-290859)       <target port='0'/>
	I0414 14:28:45.653643 1213155 main.go:141] libmachine: (ha-290859)     </serial>
	I0414 14:28:45.653650 1213155 main.go:141] libmachine: (ha-290859)     <console type='pty'>
	I0414 14:28:45.653666 1213155 main.go:141] libmachine: (ha-290859)       <target type='serial' port='0'/>
	I0414 14:28:45.653677 1213155 main.go:141] libmachine: (ha-290859)     </console>
	I0414 14:28:45.653688 1213155 main.go:141] libmachine: (ha-290859)     <rng model='virtio'>
	I0414 14:28:45.653706 1213155 main.go:141] libmachine: (ha-290859)       <backend model='random'>/dev/random</backend>
	I0414 14:28:45.653722 1213155 main.go:141] libmachine: (ha-290859)     </rng>
	I0414 14:28:45.653733 1213155 main.go:141] libmachine: (ha-290859)     
	I0414 14:28:45.653742 1213155 main.go:141] libmachine: (ha-290859)     
	I0414 14:28:45.653750 1213155 main.go:141] libmachine: (ha-290859)   </devices>
	I0414 14:28:45.653759 1213155 main.go:141] libmachine: (ha-290859) </domain>
	I0414 14:28:45.653770 1213155 main.go:141] libmachine: (ha-290859) 
	I0414 14:28:45.658722 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:59:bb:2c in network default
	I0414 14:28:45.659333 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:45.659353 1213155 main.go:141] libmachine: (ha-290859) starting domain...
	I0414 14:28:45.659378 1213155 main.go:141] libmachine: (ha-290859) ensuring networks are active...
	I0414 14:28:45.660118 1213155 main.go:141] libmachine: (ha-290859) Ensuring network default is active
	I0414 14:28:45.660455 1213155 main.go:141] libmachine: (ha-290859) Ensuring network mk-ha-290859 is active
	I0414 14:28:45.660871 1213155 main.go:141] libmachine: (ha-290859) getting domain XML...
	I0414 14:28:45.661572 1213155 main.go:141] libmachine: (ha-290859) creating domain...
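
Define-then-start above corresponds to two libvirt calls; a sketch assuming the libvirt.org/go/libvirt bindings, with the domain XML (logged in full above) elided:

package main

import (
	"log"

	"libvirt.org/go/libvirt"
)

func defineAndStart(domainXML string) {
	// Connect to the same system URI the kvm2 driver logs (qemu:///system).
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// "define libvirt domain using xml" -- persist the domain definition.
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	// "starting domain..." -- boot the defined domain.
	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}
}

func main() {
	domainXML := `<domain type='kvm'>…</domain>` // the XML assembled above, elided here
	defineAndStart(domainXML)
}
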
	I0414 14:28:46.865636 1213155 main.go:141] libmachine: (ha-290859) waiting for IP...
	I0414 14:28:46.866384 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:46.866766 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:46.866798 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:46.866746 1213178 retry.go:31] will retry after 192.973653ms: waiting for domain to come up
	I0414 14:28:47.061336 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:47.061771 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:47.061833 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:47.061746 1213178 retry.go:31] will retry after 359.567223ms: waiting for domain to come up
	I0414 14:28:47.423487 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:47.423982 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:47.424016 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:47.423949 1213178 retry.go:31] will retry after 421.939914ms: waiting for domain to come up
	I0414 14:28:47.847747 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:47.848233 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:47.848285 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:47.848207 1213178 retry.go:31] will retry after 530.391474ms: waiting for domain to come up
	I0414 14:28:48.380081 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:48.380580 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:48.380623 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:48.380551 1213178 retry.go:31] will retry after 642.117854ms: waiting for domain to come up
	I0414 14:28:49.024104 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:49.024507 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:49.024543 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:49.024472 1213178 retry.go:31] will retry after 676.607867ms: waiting for domain to come up
	I0414 14:28:49.702625 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:49.702971 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:49.702999 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:49.702940 1213178 retry.go:31] will retry after 827.403569ms: waiting for domain to come up
	I0414 14:28:50.531673 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:50.532146 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:50.532168 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:50.532111 1213178 retry.go:31] will retry after 1.096062201s: waiting for domain to come up
	I0414 14:28:51.630700 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:51.631223 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:51.631271 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:51.631180 1213178 retry.go:31] will retry after 1.695737217s: waiting for domain to come up
	I0414 14:28:53.328391 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:53.328936 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:53.328976 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:53.328895 1213178 retry.go:31] will retry after 1.847433296s: waiting for domain to come up
	I0414 14:28:55.178635 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:55.179196 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:55.179222 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:55.179116 1213178 retry.go:31] will retry after 1.882043118s: waiting for domain to come up
	I0414 14:28:57.063275 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:57.063819 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:57.063839 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:57.063785 1213178 retry.go:31] will retry after 2.565601812s: waiting for domain to come up
	I0414 14:28:59.632546 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:59.633076 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:59.633121 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:59.633056 1213178 retry.go:31] will retry after 3.119155423s: waiting for domain to come up
	I0414 14:29:02.755950 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:02.756520 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:29:02.756617 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:29:02.756481 1213178 retry.go:31] will retry after 3.570724653s: waiting for domain to come up
	I0414 14:29:06.329744 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.330242 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has current primary IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.330260 1213155 main.go:141] libmachine: (ha-290859) found domain IP: 192.168.39.110
	I0414 14:29:06.330269 1213155 main.go:141] libmachine: (ha-290859) reserving static IP address...
	I0414 14:29:06.330641 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find host DHCP lease matching {name: "ha-290859", mac: "52:54:00:be:9f:8b", ip: "192.168.39.110"} in network mk-ha-290859
	I0414 14:29:06.406487 1213155 main.go:141] libmachine: (ha-290859) DBG | Getting to WaitForSSH function...
	I0414 14:29:06.406521 1213155 main.go:141] libmachine: (ha-290859) reserved static IP address 192.168.39.110 for domain ha-290859
	I0414 14:29:06.406533 1213155 main.go:141] libmachine: (ha-290859) waiting for SSH...
	I0414 14:29:06.409873 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.410210 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:minikube Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.410253 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.410314 1213155 main.go:141] libmachine: (ha-290859) DBG | Using SSH client type: external
	I0414 14:29:06.410387 1213155 main.go:141] libmachine: (ha-290859) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa (-rw-------)
	I0414 14:29:06.410418 1213155 main.go:141] libmachine: (ha-290859) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.110 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:29:06.410439 1213155 main.go:141] libmachine: (ha-290859) DBG | About to run SSH command:
	I0414 14:29:06.410452 1213155 main.go:141] libmachine: (ha-290859) DBG | exit 0
	I0414 14:29:06.535060 1213155 main.go:141] libmachine: (ha-290859) DBG | SSH cmd err, output: <nil>: 
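
WaitForSSH above shells out to the system ssh binary with the logged options and keeps running `exit 0` until it returns status 0. A sketch with the option list abbreviated; sshReady is an illustrative name:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady probes sshd by running "exit 0" through the external ssh
// client, as in the "About to run SSH command: exit 0" lines above.
func sshReady(host, keyPath string, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath, "-p", "22",
			"docker@"+host, "exit 0")
		if cmd.Run() == nil { // exit status 0 means sshd answered
			return true
		}
		time.Sleep(time.Second)
	}
	return false
}

func main() {
	ok := sshReady("192.168.39.110", "/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa", time.Minute)
	fmt.Println("ssh ready:", ok)
}
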
	I0414 14:29:06.535328 1213155 main.go:141] libmachine: (ha-290859) KVM machine creation complete
	I0414 14:29:06.535695 1213155 main.go:141] libmachine: (ha-290859) Calling .GetConfigRaw
	I0414 14:29:06.536306 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:06.536530 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:06.536742 1213155 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 14:29:06.536766 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:06.538276 1213155 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 14:29:06.538292 1213155 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 14:29:06.538297 1213155 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 14:29:06.538303 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:06.540789 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.541096 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.541142 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.541273 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:06.541468 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.541620 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.541797 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:06.541943 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:06.542216 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:06.542236 1213155 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 14:29:06.650464 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:29:06.650493 1213155 main.go:141] libmachine: Detecting the provisioner...
	I0414 14:29:06.650505 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:06.653952 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.654723 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.654757 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.654985 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:06.655204 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.655393 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.655541 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:06.655742 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:06.655964 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:06.655983 1213155 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 14:29:06.763752 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 14:29:06.763848 1213155 main.go:141] libmachine: found compatible host: buildroot
	I0414 14:29:06.763862 1213155 main.go:141] libmachine: Provisioning with buildroot...
	I0414 14:29:06.763874 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:29:06.764294 1213155 buildroot.go:166] provisioning hostname "ha-290859"
	I0414 14:29:06.764326 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:29:06.764523 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:06.767077 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.767516 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.767542 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.767639 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:06.767813 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.767978 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.768165 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:06.768341 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:06.768572 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:06.768583 1213155 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-290859 && echo "ha-290859" | sudo tee /etc/hostname
	I0414 14:29:06.889296 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-290859
	
	I0414 14:29:06.889330 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:06.892172 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.892600 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.892626 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.892865 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:06.893083 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.893277 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.893435 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:06.893648 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:06.893858 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:06.893874 1213155 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-290859' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-290859/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-290859' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:29:07.007141 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:29:07.007184 1213155 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1196368/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1196368/.minikube}
	I0414 14:29:07.007203 1213155 buildroot.go:174] setting up certificates
	I0414 14:29:07.007215 1213155 provision.go:84] configureAuth start
	I0414 14:29:07.007224 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:29:07.007528 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:29:07.010400 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.010788 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.010824 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.010979 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.012963 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.013271 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.013387 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.013515 1213155 provision.go:143] copyHostCerts
	I0414 14:29:07.013548 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:29:07.013586 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem, removing ...
	I0414 14:29:07.013609 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:29:07.013691 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem (1082 bytes)
	I0414 14:29:07.013790 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:29:07.013815 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem, removing ...
	I0414 14:29:07.013825 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:29:07.013863 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem (1123 bytes)
	I0414 14:29:07.013930 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:29:07.013953 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem, removing ...
	I0414 14:29:07.013962 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:29:07.013998 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem (1675 bytes)
	I0414 14:29:07.014066 1213155 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem org=jenkins.ha-290859 san=[127.0.0.1 192.168.39.110 ha-290859 localhost minikube]
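
The server cert step issues a CA-signed certificate whose SANs cover every name and address in the san=[...] list above. A self-contained sketch with Go's crypto/x509; the in-memory CA and signServerCert are illustrative (minikube loads its CA from the paths logged above):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// signServerCert issues a server certificate signed by the given CA with
// SANs matching the san=[...] list from the log above.
func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-290859"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump below
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-290859", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.110")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}

func main() {
	// Illustrative in-memory CA standing in for ca.pem/ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)
	der, _, err := signServerCert(caCert, caKey)
	fmt.Println(len(der), err)
}
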
	I0414 14:29:07.096347 1213155 provision.go:177] copyRemoteCerts
	I0414 14:29:07.096413 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:29:07.096445 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.099387 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.099720 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.099754 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.099919 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.100133 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.100320 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.100477 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:07.185597 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0414 14:29:07.185665 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 14:29:07.208427 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0414 14:29:07.208514 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0414 14:29:07.230077 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0414 14:29:07.230146 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 14:29:07.252057 1213155 provision.go:87] duration metric: took 244.822415ms to configureAuth
	I0414 14:29:07.252098 1213155 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:29:07.252381 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:07.252417 1213155 main.go:141] libmachine: Checking connection to Docker...
	I0414 14:29:07.252428 1213155 main.go:141] libmachine: (ha-290859) Calling .GetURL
	I0414 14:29:07.253526 1213155 main.go:141] libmachine: (ha-290859) DBG | using libvirt version 6000000
	I0414 14:29:07.255629 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.255987 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.256013 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.256164 1213155 main.go:141] libmachine: Docker is up and running!
	I0414 14:29:07.256179 1213155 main.go:141] libmachine: Reticulating splines...
	I0414 14:29:07.256186 1213155 client.go:171] duration metric: took 22.312490028s to LocalClient.Create
	I0414 14:29:07.256207 1213155 start.go:167] duration metric: took 22.312544194s to libmachine.API.Create "ha-290859"
	I0414 14:29:07.256216 1213155 start.go:293] postStartSetup for "ha-290859" (driver="kvm2")
	I0414 14:29:07.256225 1213155 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:29:07.256242 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.256494 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:29:07.256518 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.258683 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.259095 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.259129 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.259274 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.259443 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.259598 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.259770 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:07.341222 1213155 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:29:07.344960 1213155 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:29:07.344983 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/addons for local assets ...
	I0414 14:29:07.345036 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/files for local assets ...
	I0414 14:29:07.345105 1213155 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> 12036392.pem in /etc/ssl/certs
	I0414 14:29:07.345117 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /etc/ssl/certs/12036392.pem
	I0414 14:29:07.345204 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:29:07.353618 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:29:07.375295 1213155 start.go:296] duration metric: took 119.0622ms for postStartSetup
	I0414 14:29:07.375348 1213155 main.go:141] libmachine: (ha-290859) Calling .GetConfigRaw
	I0414 14:29:07.376009 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:29:07.378738 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.379089 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.379127 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.379360 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:29:07.379552 1213155 start.go:128] duration metric: took 22.454193164s to createHost
	I0414 14:29:07.379576 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.381911 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.382271 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.382299 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.382412 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.382636 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.382763 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.382918 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.383103 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:07.383383 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:07.383397 1213155 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:29:07.491798 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744640947.466359070
	
	I0414 14:29:07.491832 1213155 fix.go:216] guest clock: 1744640947.466359070
	I0414 14:29:07.491843 1213155 fix.go:229] Guest: 2025-04-14 14:29:07.46635907 +0000 UTC Remote: 2025-04-14 14:29:07.37956282 +0000 UTC m=+22.563725092 (delta=86.79625ms)
	I0414 14:29:07.491874 1213155 fix.go:200] guest clock delta is within tolerance: 86.79625ms
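
The guest-clock check parses the `date +%s.%N` output as fractional seconds and compares the result against the host timestamp captured around the SSH round trip. A sketch; the 2s tolerance here is illustrative, not minikube's exact threshold:

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// clockDelta converts "1744640947.466359070" (date +%s.%N) into a
// time.Time and returns guest minus host -- positive when the guest runs
// ahead of the host, as in the delta=86.79625ms line above.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(int64(secs), int64(math.Mod(secs, 1)*1e9))
	return guest.Sub(host), nil
}

func main() {
	d, _ := clockDelta("1744640947.466359070", time.Now())
	fmt.Println("guest clock delta:", d, "within tolerance:", d.Abs() < 2*time.Second)
}
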
	I0414 14:29:07.491882 1213155 start.go:83] releasing machines lock for "ha-290859", held for 22.566621352s
	I0414 14:29:07.491951 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.492257 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:29:07.494784 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.495186 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.495213 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.495369 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.495891 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.496108 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.496210 1213155 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:29:07.496270 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.496330 1213155 ssh_runner.go:195] Run: cat /version.json
	I0414 14:29:07.496359 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.499187 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.499556 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.499585 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.499605 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.499687 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.499909 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.500059 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.500076 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.500080 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.500225 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:07.500343 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.500495 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.500676 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.500868 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:07.610155 1213155 ssh_runner.go:195] Run: systemctl --version
	I0414 14:29:07.615832 1213155 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 14:29:07.620841 1213155 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:29:07.620918 1213155 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:29:07.635201 1213155 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 14:29:07.635238 1213155 start.go:495] detecting cgroup driver to use...
	I0414 14:29:07.635339 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0414 14:29:07.664507 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0414 14:29:07.677886 1213155 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:29:07.677968 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:29:07.691126 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:29:07.704327 1213155 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:29:07.821296 1213155 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:29:07.981478 1213155 docker.go:233] disabling docker service ...
	I0414 14:29:07.981570 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:29:07.995082 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:29:08.007593 1213155 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:29:08.118166 1213155 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:29:08.233009 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 14:29:08.245943 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:29:08.262966 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0414 14:29:08.272218 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0414 14:29:08.281344 1213155 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0414 14:29:08.281397 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0414 14:29:08.290468 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:29:08.299561 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0414 14:29:08.308656 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:29:08.317719 1213155 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:29:08.327133 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0414 14:29:08.336264 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0414 14:29:08.345279 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
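
Each sed run above is a targeted line rewrite inside /etc/containerd/config.toml. The same edit expressed in Go, using the SystemdCgroup key (which pins containerd to the cgroupfs driver) as the example; setSystemdCgroup is an illustrative name:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setSystemdCgroup rewrites the SystemdCgroup key in place, the Go
// equivalent of the `sed -i -r 's|^( *)SystemdCgroup = .*$|...|g'` run above.
func setSystemdCgroup(path string, enabled bool) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("${1}SystemdCgroup = %t", enabled)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := setSystemdCgroup("/etc/containerd/config.toml", false); err != nil {
		fmt.Println(err)
	}
}
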
	I0414 14:29:08.354386 1213155 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:29:08.362578 1213155 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:29:08.362625 1213155 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:29:08.374609 1213155 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
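
The sequence above is probe-then-fix: when the bridge-nf-call-iptables sysctl is absent, load br_netfilter, then enable IPv4 forwarding either way. A sketch via os/exec; ensureBridgeNetfilter is an illustrative name:

package main

import (
	"log"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter reproduces the fallback above: if the bridge
// sysctl cannot be read, load the br_netfilter module, then turn on
// IPv4 forwarding regardless.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		log.Printf("couldn't verify netfilter (%v); loading br_netfilter", err)
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return err
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		os.Exit(1)
	}
}
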
	I0414 14:29:08.383117 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:29:08.490311 1213155 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0414 14:29:08.517222 1213155 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0414 14:29:08.517297 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:29:08.522141 1213155 retry.go:31] will retry after 1.326617724s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0414 14:29:09.849693 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
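
"Will wait 60s for socket path" above is a plain stat poll with retry until the containerd socket appears. A sketch (waitForSocket is an illustrative name; minikube's actual retry uses the jittered backoff shown earlier rather than a fixed one-second sleep):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for the containerd socket until it exists or the
// deadline passes, matching the stat/retry lines above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		os.Exit(1)
	}
}
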
	I0414 14:29:09.855377 1213155 start.go:563] Will wait 60s for crictl version
	I0414 14:29:09.855452 1213155 ssh_runner.go:195] Run: which crictl
	I0414 14:29:09.859356 1213155 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:29:09.901676 1213155 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0414 14:29:09.901749 1213155 ssh_runner.go:195] Run: containerd --version
	I0414 14:29:09.933729 1213155 ssh_runner.go:195] Run: containerd --version
	I0414 14:29:09.957147 1213155 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.23 ...
	I0414 14:29:09.958358 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:29:09.961074 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:09.961436 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:09.961465 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:09.961654 1213155 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 14:29:09.965618 1213155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:29:09.977763 1213155 kubeadm.go:883] updating cluster {Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 14:29:09.977920 1213155 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:29:09.977985 1213155 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:29:10.007423 1213155 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0414 14:29:10.007567 1213155 ssh_runner.go:195] Run: which lz4
	I0414 14:29:10.011302 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0414 14:29:10.011399 1213155 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 14:29:10.015201 1213155 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 14:29:10.015237 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (398567491 bytes)
	I0414 14:29:11.177802 1213155 containerd.go:563] duration metric: took 1.166430977s to copy over tarball
	I0414 14:29:11.177883 1213155 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 14:29:13.222422 1213155 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.044497794s)
	I0414 14:29:13.222461 1213155 containerd.go:570] duration metric: took 2.04462504s to extract the tarball
	I0414 14:29:13.222471 1213155 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 14:29:13.258541 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:29:13.368119 1213155 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0414 14:29:13.394813 1213155 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:29:13.428402 1213155 retry.go:31] will retry after 248.442754ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-04-14T14:29:13Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0414 14:29:13.677983 1213155 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:29:13.709958 1213155 containerd.go:627] all images are preloaded for containerd runtime.
	I0414 14:29:13.709986 1213155 cache_images.go:84] Images are preloaded, skipping loading
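
The preloaded/not-preloaded decision above (containerd.go:623 and :627) comes down to listing images with `crictl images --output json` and checking for a pivot image of the target Kubernetes version. A sketch; the single-tag check is a simplification of minikube's logic:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imagesPreloaded runs `crictl images --output json` and reports whether
// the expected kube-apiserver tag is present, mirroring the decisions above.
func imagesPreloaded(want string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var resp struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}
	if err := json.Unmarshal(out, &resp); err != nil {
		return false, err
	}
	for _, img := range resp.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := imagesPreloaded("registry.k8s.io/kube-apiserver:v1.32.2")
	fmt.Println("preloaded:", ok, err)
}
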
	I0414 14:29:13.709997 1213155 kubeadm.go:934] updating node { 192.168.39.110 8443 v1.32.2 containerd true true} ...
	I0414 14:29:13.710119 1213155 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-290859 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.110
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 14:29:13.710205 1213155 ssh_runner.go:195] Run: sudo crictl info
	I0414 14:29:13.747854 1213155 cni.go:84] Creating CNI manager for ""
	I0414 14:29:13.747881 1213155 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0414 14:29:13.747891 1213155 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 14:29:13.747912 1213155 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.110 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-290859 NodeName:ha-290859 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.110"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.110 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 14:29:13.748064 1213155 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.110
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-290859"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.110"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.110"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 14:29:13.748098 1213155 kube-vip.go:115] generating kube-vip config ...
	I0414 14:29:13.748144 1213155 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0414 14:29:13.764006 1213155 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
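	Note: kube-vip's control-plane load balancing (lb_enable in the config below) is built on IPVS, which is why minikube modprobes ip_vs/ip_vs_rr/ip_vs_wrr/ip_vs_sh/nf_conntrack first. A sketch for verifying the prerequisites on the node (ipvsadm is an assumed extra tool, not something minikube installs):
	  lsmod | grep -E '^(ip_vs|nf_conntrack)'
	  sudo ipvsadm -Ln   # once kube-vip runs, shows the 8443 virtual server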
	I0414 14:29:13.764157 1213155 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.10
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
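	Note: this manifest is written to /etc/kubernetes/manifests (see the scp below), so the kubelet runs kube-vip as a static pod. The elected leader advertises the HA VIP 192.168.39.254/32 on eth0 via ARP and holds the plndr-cp-lock lease. A sketch for checking both once the cluster is up:
	  ip addr show dev eth0 | grep 192.168.39.254
	  kubectl -n kube-system get lease plndr-cp-lock -o yaml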
	I0414 14:29:13.764258 1213155 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 14:29:13.773742 1213155 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 14:29:13.773825 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0414 14:29:13.782879 1213155 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0414 14:29:13.798384 1213155 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 14:29:13.813614 1213155 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
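	Note: the three-document kubeadm config rendered earlier is staged as kubeadm.yaml.new (2305 bytes) and only promoted to /var/tmp/minikube/kubeadm.yaml right before init (see the cp at 14:29:15.171810). A hedged way to sanity-check such a file without touching node state:
	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
	  sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml   # in recent kubeadm releases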
	I0414 14:29:13.828571 1213155 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1448 bytes)
	I0414 14:29:13.844489 1213155 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0414 14:29:13.848595 1213155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:29:13.861109 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:29:13.970530 1213155 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:29:13.987774 1213155 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859 for IP: 192.168.39.110
	I0414 14:29:13.987806 1213155 certs.go:194] generating shared ca certs ...
	I0414 14:29:13.987826 1213155 certs.go:226] acquiring lock for ca certs: {Name:mk7215406b4c41badf9eca6bf9f1036fd88f670e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:13.988007 1213155 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key
	I0414 14:29:13.988081 1213155 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key
	I0414 14:29:13.988097 1213155 certs.go:256] generating profile certs ...
	I0414 14:29:13.988180 1213155 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key
	I0414 14:29:13.988200 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt with IP's: []
	I0414 14:29:14.112386 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt ...
	I0414 14:29:14.112419 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt: {Name:mkaa12fb6551a5751b7fccd564d65a45c41d9fae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.112582 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key ...
	I0414 14:29:14.112593 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key: {Name:mk289f4dd0a4fd9031dc4ffc7198a0cf95bd5550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.112674 1213155 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037
	I0414 14:29:14.112690 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110 192.168.39.254]
	I0414 14:29:14.362652 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037 ...
	I0414 14:29:14.362686 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037: {Name:mkb37a2918627d85c90b385a1878c8973ae4ce15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.362861 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037 ...
	I0414 14:29:14.362875 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037: {Name:mk9be12aff468559ae8511cb5c354c2cb0f19d89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.362947 1213155 certs.go:381] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt
	I0414 14:29:14.363058 1213155 certs.go:385] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key
	I0414 14:29:14.363124 1213155 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key
	I0414 14:29:14.363139 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt with IP's: []
	I0414 14:29:14.734988 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt ...
	I0414 14:29:14.735020 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt: {Name:mkd4197f76084714cf4c93b86f69c9de5e486dfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.735175 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key ...
	I0414 14:29:14.735185 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key: {Name:mkafd73813de8b0bb698e460f51557bc241d5b76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
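	Note: the apiserver cert above is minted with SANs covering every address clients may use: the in-cluster service IP 10.96.0.1, loopback, the node IP 192.168.39.110, and the HA VIP 192.168.39.254. To confirm from the workstation:
	  openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt \
	    | grep -A1 'Subject Alternative Name'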
	I0414 14:29:14.735249 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0414 14:29:14.735287 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0414 14:29:14.735300 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0414 14:29:14.735312 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0414 14:29:14.735324 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0414 14:29:14.735336 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0414 14:29:14.735348 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0414 14:29:14.735362 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0414 14:29:14.735413 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem (1338 bytes)
	W0414 14:29:14.735450 1213155 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639_empty.pem, impossibly tiny 0 bytes
	I0414 14:29:14.735459 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 14:29:14.735483 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem (1082 bytes)
	I0414 14:29:14.735504 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:29:14.735524 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem (1675 bytes)
	I0414 14:29:14.735559 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:29:14.735585 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:14.735598 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem -> /usr/share/ca-certificates/1203639.pem
	I0414 14:29:14.735609 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /usr/share/ca-certificates/12036392.pem
	I0414 14:29:14.736193 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:29:14.767094 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:29:14.800218 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:29:14.821856 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:29:14.844537 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0414 14:29:14.866333 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 14:29:14.888112 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:29:14.916382 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 14:29:14.938747 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:29:14.961044 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem --> /usr/share/ca-certificates/1203639.pem (1338 bytes)
	I0414 14:29:14.982817 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /usr/share/ca-certificates/12036392.pem (1708 bytes)
	I0414 14:29:15.004432 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 14:29:15.020381 1213155 ssh_runner.go:195] Run: openssl version
	I0414 14:29:15.026049 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:29:15.036472 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:15.040722 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:15.040772 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:15.046327 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 14:29:15.056866 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1203639.pem && ln -fs /usr/share/ca-certificates/1203639.pem /etc/ssl/certs/1203639.pem"
	I0414 14:29:15.067689 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1203639.pem
	I0414 14:29:15.071944 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1203639.pem
	I0414 14:29:15.071988 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1203639.pem
	I0414 14:29:15.077553 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1203639.pem /etc/ssl/certs/51391683.0"
	I0414 14:29:15.088088 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12036392.pem && ln -fs /usr/share/ca-certificates/12036392.pem /etc/ssl/certs/12036392.pem"
	I0414 14:29:15.098760 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12036392.pem
	I0414 14:29:15.103102 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/12036392.pem
	I0414 14:29:15.103157 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12036392.pem
	I0414 14:29:15.108670 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12036392.pem /etc/ssl/certs/3ec20f2e.0"
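	Note: the `.0` symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes; that is the directory-lookup scheme OpenSSL uses to find a CA in /etc/ssl/certs, and exactly what the `openssl x509 -hash` runs compute. Reproducing one by hand:
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 here
	  ls -l "/etc/ssl/certs/$h.0"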
	I0414 14:29:15.119187 1213155 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:29:15.123052 1213155 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 14:29:15.123124 1213155 kubeadm.go:392] StartCluster: {Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:29:15.123226 1213155 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0414 14:29:15.123302 1213155 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 14:29:15.161985 1213155 cri.go:89] found id: ""
	I0414 14:29:15.162066 1213155 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 14:29:15.171810 1213155 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 14:29:15.180816 1213155 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 14:29:15.189781 1213155 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 14:29:15.189798 1213155 kubeadm.go:157] found existing configuration files:
	
	I0414 14:29:15.189837 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 14:29:15.198461 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 14:29:15.198520 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 14:29:15.207495 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 14:29:15.216131 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 14:29:15.216195 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 14:29:15.224923 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 14:29:15.233259 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 14:29:15.233331 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 14:29:15.241811 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 14:29:15.250678 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 14:29:15.250735 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
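	Note: the four grep/rm pairs above are minikube's stale-config sweep: each kubeconfig that does not reference the expected endpoint is removed so kubeadm can regenerate it (on this first start none exist yet). The pattern, condensed:
	  for f in admin kubelet controller-manager scheduler; do
	    sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f.conf" \
	      || sudo rm -f "/etc/kubernetes/$f.conf"
	  done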
	I0414 14:29:15.260028 1213155 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 14:29:15.480841 1213155 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 14:29:26.375395 1213155 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0414 14:29:26.375454 1213155 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 14:29:26.375539 1213155 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 14:29:26.375638 1213155 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 14:29:26.375756 1213155 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 14:29:26.375859 1213155 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 14:29:26.377483 1213155 out.go:235]   - Generating certificates and keys ...
	I0414 14:29:26.377576 1213155 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 14:29:26.377649 1213155 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 14:29:26.377746 1213155 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 14:29:26.377814 1213155 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 14:29:26.377894 1213155 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 14:29:26.377993 1213155 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 14:29:26.378062 1213155 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 14:29:26.378201 1213155 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-290859 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
	I0414 14:29:26.378273 1213155 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 14:29:26.378435 1213155 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-290859 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
	I0414 14:29:26.378525 1213155 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 14:29:26.378617 1213155 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 14:29:26.378679 1213155 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 14:29:26.378756 1213155 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 14:29:26.378826 1213155 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 14:29:26.378905 1213155 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0414 14:29:26.378987 1213155 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 14:29:26.379078 1213155 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 14:29:26.379147 1213155 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 14:29:26.379232 1213155 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 14:29:26.379336 1213155 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 14:29:26.381520 1213155 out.go:235]   - Booting up control plane ...
	I0414 14:29:26.381636 1213155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 14:29:26.381716 1213155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 14:29:26.381797 1213155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 14:29:26.381942 1213155 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 14:29:26.382066 1213155 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 14:29:26.382127 1213155 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 14:29:26.382279 1213155 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0414 14:29:26.382430 1213155 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0414 14:29:26.382522 1213155 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.073677ms
	I0414 14:29:26.382613 1213155 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0414 14:29:26.382699 1213155 kubeadm.go:310] [api-check] The API server is healthy after 6.046564753s
	I0414 14:29:26.382824 1213155 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0414 14:29:26.382965 1213155 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0414 14:29:26.383055 1213155 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0414 14:29:26.383232 1213155 kubeadm.go:310] [mark-control-plane] Marking the node ha-290859 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0414 14:29:26.383336 1213155 kubeadm.go:310] [bootstrap-token] Using token: vqb1fe.jxjhh2el8g0wstxf
	I0414 14:29:26.384515 1213155 out.go:235]   - Configuring RBAC rules ...
	I0414 14:29:26.384631 1213155 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0414 14:29:26.384713 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0414 14:29:26.384863 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0414 14:29:26.384975 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0414 14:29:26.385071 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0414 14:29:26.385151 1213155 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0414 14:29:26.385262 1213155 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0414 14:29:26.385326 1213155 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0414 14:29:26.385400 1213155 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0414 14:29:26.385416 1213155 kubeadm.go:310] 
	I0414 14:29:26.385469 1213155 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0414 14:29:26.385475 1213155 kubeadm.go:310] 
	I0414 14:29:26.385551 1213155 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0414 14:29:26.385557 1213155 kubeadm.go:310] 
	I0414 14:29:26.385578 1213155 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0414 14:29:26.385628 1213155 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0414 14:29:26.385686 1213155 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0414 14:29:26.385693 1213155 kubeadm.go:310] 
	I0414 14:29:26.385743 1213155 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0414 14:29:26.385752 1213155 kubeadm.go:310] 
	I0414 14:29:26.385800 1213155 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0414 14:29:26.385806 1213155 kubeadm.go:310] 
	I0414 14:29:26.385852 1213155 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0414 14:29:26.385921 1213155 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0414 14:29:26.385993 1213155 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0414 14:29:26.385999 1213155 kubeadm.go:310] 
	I0414 14:29:26.386068 1213155 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0414 14:29:26.386137 1213155 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0414 14:29:26.386143 1213155 kubeadm.go:310] 
	I0414 14:29:26.386219 1213155 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token vqb1fe.jxjhh2el8g0wstxf \
	I0414 14:29:26.386324 1213155 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c1bc537cee1b1ab5982921331b936a1839b1da6b0963279993bdeae11071854b \
	I0414 14:29:26.386357 1213155 kubeadm.go:310] 	--control-plane 
	I0414 14:29:26.386367 1213155 kubeadm.go:310] 
	I0414 14:29:26.386468 1213155 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0414 14:29:26.386481 1213155 kubeadm.go:310] 
	I0414 14:29:26.386583 1213155 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vqb1fe.jxjhh2el8g0wstxf \
	I0414 14:29:26.386727 1213155 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c1bc537cee1b1ab5982921331b936a1839b1da6b0963279993bdeae11071854b 
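	Note: the --discovery-token-ca-cert-hash above is the SHA-256 of the cluster CA's DER-encoded public key; joining nodes use it to pin the CA. It can be recomputed from the CA the log copied to /var/lib/minikube/certs/ca.crt (the standard kubeadm recipe; an RSA CA key is assumed):
	  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* /sha256:/'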
	I0414 14:29:26.386755 1213155 cni.go:84] Creating CNI manager for ""
	I0414 14:29:26.386764 1213155 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0414 14:29:26.388208 1213155 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0414 14:29:26.389242 1213155 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0414 14:29:26.394753 1213155 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0414 14:29:26.394774 1213155 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0414 14:29:26.412210 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
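	Note: the 2601-byte cni.yaml applied here is the kindnet manifest recommended at 14:29:26.386764. A sketch for confirming the rollout (the daemonset name "kindnet" is an assumption about the manifest's contents):
	  sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-system rollout status daemonset kindnet --timeout=60s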
	I0414 14:29:26.820060 1213155 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 14:29:26.820136 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:26.820188 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-290859 minikube.k8s.io/updated_at=2025_04_14T14_29_26_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=ed8f1f01b35eff2786f40199152a1775806f2de2 minikube.k8s.io/name=ha-290859 minikube.k8s.io/primary=true
	I0414 14:29:27.135153 1213155 ops.go:34] apiserver oom_adj: -16
	I0414 14:29:27.135367 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:27.635449 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:28.135449 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:28.636235 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:29.136309 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:29.636026 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:29.742992 1213155 kubeadm.go:1113] duration metric: took 2.922923817s to wait for elevateKubeSystemPrivileges
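	Note: the repeated `kubectl get sa default` calls above are a readiness gate: the default ServiceAccount only appears once kube-controller-manager is running, and the cluster-admin binding created at 14:29:26.820136 for kube-system:default depends on it. The loop, roughly:
	  until sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5
	  done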
	I0414 14:29:29.743045 1213155 kubeadm.go:394] duration metric: took 14.619926947s to StartCluster
	I0414 14:29:29.743074 1213155 settings.go:142] acquiring lock: {Name:mk41907a6d0da0bb56b7cd58b5d8065ec36ecc97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:29.743194 1213155 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:29:29.744197 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/kubeconfig: {Name:mkeb969af3beabfdafe344f27031959a97621135 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:29.744491 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0414 14:29:29.744502 1213155 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0414 14:29:29.744531 1213155 start.go:241] waiting for startup goroutines ...
	I0414 14:29:29.744555 1213155 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 14:29:29.744638 1213155 addons.go:69] Setting storage-provisioner=true in profile "ha-290859"
	I0414 14:29:29.744667 1213155 addons.go:238] Setting addon storage-provisioner=true in "ha-290859"
	I0414 14:29:29.744674 1213155 addons.go:69] Setting default-storageclass=true in profile "ha-290859"
	I0414 14:29:29.744699 1213155 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:29:29.744707 1213155 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-290859"
	I0414 14:29:29.744811 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:29.745181 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.745244 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.745183 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.745351 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.761398 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40887
	I0414 14:29:29.761447 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39907
	I0414 14:29:29.761914 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.762048 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.762457 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.762483 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.762590 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.762615 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.762878 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.762995 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.763052 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:29.763589 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.763641 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.765711 1213155 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:29:29.765898 1213155 kapi.go:59] client config for ha-290859: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt", KeyFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key", CAFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0414 14:29:29.766513 1213155 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0414 14:29:29.766536 1213155 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0414 14:29:29.766543 1213155 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0414 14:29:29.766547 1213155 cert_rotation.go:140] Starting client certificate rotation controller
	I0414 14:29:29.766549 1213155 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0414 14:29:29.766958 1213155 addons.go:238] Setting addon default-storageclass=true in "ha-290859"
	I0414 14:29:29.767009 1213155 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:29:29.767411 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.767464 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.779638 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46315
	I0414 14:29:29.780179 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.780847 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.780887 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.781279 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.781512 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:29.783372 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:29.783403 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36833
	I0414 14:29:29.783908 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.784349 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.784370 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.784677 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.785084 1213155 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 14:29:29.785313 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.785366 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.786178 1213155 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 14:29:29.786200 1213155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 14:29:29.786221 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:29.789923 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:29.790430 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:29.790464 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:29.790637 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:29.790795 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:29.790922 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:29.791099 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:29.802732 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37933
	I0414 14:29:29.803356 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.803862 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.803890 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.804276 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.804490 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:29.806170 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:29.806431 1213155 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 14:29:29.806453 1213155 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 14:29:29.806472 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:29.808998 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:29.809401 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:29.809433 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:29.809569 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:29.809729 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:29.809892 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:29.810022 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:29.896163 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0414 14:29:29.925192 1213155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 14:29:29.976032 1213155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 14:29:30.538988 1213155 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
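	Note: the sed pipeline at 14:29:29.896163 rewrites the CoreDNS Corefile in place, inserting a hosts{} block that resolves host.minikube.internal to the gateway 192.168.39.1 (plus a log directive before errors). To inspect the injected block:
	  kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -B1 -A3 'hosts {'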
	I0414 14:29:30.715801 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.715837 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.715837 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.715853 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.716172 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.716195 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.716206 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.716213 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.716280 1213155 main.go:141] libmachine: (ha-290859) DBG | Closing plugin on server side
	I0414 14:29:30.716311 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.716327 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.716336 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.716346 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.716567 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.716583 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.716597 1213155 main.go:141] libmachine: (ha-290859) DBG | Closing plugin on server side
	I0414 14:29:30.716566 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.716613 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.716759 1213155 round_trippers.go:470] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0414 14:29:30.716773 1213155 round_trippers.go:476] Request Headers:
	I0414 14:29:30.716785 1213155 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:29:30.716791 1213155 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:29:30.730413 1213155 round_trippers.go:581] Response Status: 200 OK in 13 milliseconds
	I0414 14:29:30.730637 1213155 round_trippers.go:470] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0414 14:29:30.730648 1213155 round_trippers.go:476] Request Headers:
	I0414 14:29:30.730655 1213155 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:29:30.730659 1213155 round_trippers.go:480]     Content-Type: application/vnd.kubernetes.protobuf
	I0414 14:29:30.730662 1213155 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:29:30.734349 1213155 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0414 14:29:30.734498 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.734513 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.734892 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.734913 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.734944 1213155 main.go:141] libmachine: (ha-290859) DBG | Closing plugin on server side
	I0414 14:29:30.736606 1213155 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0414 14:29:30.738276 1213155 addons.go:514] duration metric: took 993.723048ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0414 14:29:30.738323 1213155 start.go:246] waiting for cluster config update ...
	I0414 14:29:30.738339 1213155 start.go:255] writing updated cluster config ...
	I0414 14:29:30.739993 1213155 out.go:201] 
	I0414 14:29:30.741235 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:30.741303 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:29:30.742718 1213155 out.go:177] * Starting "ha-290859-m02" control-plane node in "ha-290859" cluster
	I0414 14:29:30.743745 1213155 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:29:30.743770 1213155 cache.go:56] Caching tarball of preloaded images
	I0414 14:29:30.743876 1213155 preload.go:172] Found /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0414 14:29:30.743890 1213155 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0414 14:29:30.743970 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:29:30.744172 1213155 start.go:360] acquireMachinesLock for ha-290859-m02: {Name:mk496006d22a0565bb9e0d565e1b3cb0cf0971cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 14:29:30.744229 1213155 start.go:364] duration metric: took 28.185µs to acquireMachinesLock for "ha-290859-m02"
	I0414 14:29:30.744249 1213155 start.go:93] Provisioning new machine with config: &{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0414 14:29:30.744334 1213155 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0414 14:29:30.745838 1213155 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0414 14:29:30.745923 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:30.745962 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:30.761449 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46555
	I0414 14:29:30.761938 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:30.762474 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:30.762500 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:30.762925 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:30.763197 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:29:30.763401 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:30.763637 1213155 start.go:159] libmachine.API.Create for "ha-290859" (driver="kvm2")
	I0414 14:29:30.763675 1213155 client.go:168] LocalClient.Create starting
	I0414 14:29:30.763717 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem
	I0414 14:29:30.763761 1213155 main.go:141] libmachine: Decoding PEM data...
	I0414 14:29:30.763783 1213155 main.go:141] libmachine: Parsing certificate...
	I0414 14:29:30.763861 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem
	I0414 14:29:30.763890 1213155 main.go:141] libmachine: Decoding PEM data...
	I0414 14:29:30.763907 1213155 main.go:141] libmachine: Parsing certificate...
	I0414 14:29:30.763954 1213155 main.go:141] libmachine: Running pre-create checks...
	I0414 14:29:30.763968 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .PreCreateCheck
	I0414 14:29:30.764183 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
	I0414 14:29:30.764607 1213155 main.go:141] libmachine: Creating machine...
	I0414 14:29:30.764633 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .Create
	I0414 14:29:30.764796 1213155 main.go:141] libmachine: (ha-290859-m02) creating KVM machine...
	I0414 14:29:30.764820 1213155 main.go:141] libmachine: (ha-290859-m02) creating network...
	I0414 14:29:30.765949 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found existing default KVM network
	I0414 14:29:30.766029 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found existing private KVM network mk-ha-290859
	I0414 14:29:30.766196 1213155 main.go:141] libmachine: (ha-290859-m02) setting up store path in /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02 ...
	I0414 14:29:30.766222 1213155 main.go:141] libmachine: (ha-290859-m02) building disk image from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 14:29:30.766301 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:30.766189 1213531 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:29:30.766373 1213155 main.go:141] libmachine: (ha-290859-m02) Downloading /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 14:29:31.062543 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:31.062391 1213531 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa...
	I0414 14:29:31.719024 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:31.718890 1213531 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/ha-290859-m02.rawdisk...
	I0414 14:29:31.719061 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Writing magic tar header
	I0414 14:29:31.719076 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Writing SSH key tar header
	I0414 14:29:31.719086 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:31.719015 1213531 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02 ...
	I0414 14:29:31.719187 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02
	I0414 14:29:31.719213 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02 (perms=drwx------)
	I0414 14:29:31.719221 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines
	I0414 14:29:31.719232 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:29:31.719239 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines (perms=drwxr-xr-x)
	I0414 14:29:31.719270 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368
	I0414 14:29:31.719288 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube (perms=drwxr-xr-x)
	I0414 14:29:31.719298 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 14:29:31.719315 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins
	I0414 14:29:31.719326 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home
	I0414 14:29:31.719336 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | skipping /home - not owner
	I0414 14:29:31.719349 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368 (perms=drwxrwxr-x)
	I0414 14:29:31.719368 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 14:29:31.719380 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 14:29:31.719386 1213155 main.go:141] libmachine: (ha-290859-m02) creating domain...
	I0414 14:29:31.720303 1213155 main.go:141] libmachine: (ha-290859-m02) define libvirt domain using xml: 
	I0414 14:29:31.720321 1213155 main.go:141] libmachine: (ha-290859-m02) <domain type='kvm'>
	I0414 14:29:31.720330 1213155 main.go:141] libmachine: (ha-290859-m02)   <name>ha-290859-m02</name>
	I0414 14:29:31.720338 1213155 main.go:141] libmachine: (ha-290859-m02)   <memory unit='MiB'>2200</memory>
	I0414 14:29:31.720346 1213155 main.go:141] libmachine: (ha-290859-m02)   <vcpu>2</vcpu>
	I0414 14:29:31.720352 1213155 main.go:141] libmachine: (ha-290859-m02)   <features>
	I0414 14:29:31.720359 1213155 main.go:141] libmachine: (ha-290859-m02)     <acpi/>
	I0414 14:29:31.720364 1213155 main.go:141] libmachine: (ha-290859-m02)     <apic/>
	I0414 14:29:31.720371 1213155 main.go:141] libmachine: (ha-290859-m02)     <pae/>
	I0414 14:29:31.720381 1213155 main.go:141] libmachine: (ha-290859-m02)     
	I0414 14:29:31.720411 1213155 main.go:141] libmachine: (ha-290859-m02)   </features>
	I0414 14:29:31.720433 1213155 main.go:141] libmachine: (ha-290859-m02)   <cpu mode='host-passthrough'>
	I0414 14:29:31.720452 1213155 main.go:141] libmachine: (ha-290859-m02)   
	I0414 14:29:31.720461 1213155 main.go:141] libmachine: (ha-290859-m02)   </cpu>
	I0414 14:29:31.720488 1213155 main.go:141] libmachine: (ha-290859-m02)   <os>
	I0414 14:29:31.720507 1213155 main.go:141] libmachine: (ha-290859-m02)     <type>hvm</type>
	I0414 14:29:31.720537 1213155 main.go:141] libmachine: (ha-290859-m02)     <boot dev='cdrom'/>
	I0414 14:29:31.720559 1213155 main.go:141] libmachine: (ha-290859-m02)     <boot dev='hd'/>
	I0414 14:29:31.720572 1213155 main.go:141] libmachine: (ha-290859-m02)     <bootmenu enable='no'/>
	I0414 14:29:31.720587 1213155 main.go:141] libmachine: (ha-290859-m02)   </os>
	I0414 14:29:31.720597 1213155 main.go:141] libmachine: (ha-290859-m02)   <devices>
	I0414 14:29:31.720609 1213155 main.go:141] libmachine: (ha-290859-m02)     <disk type='file' device='cdrom'>
	I0414 14:29:31.720626 1213155 main.go:141] libmachine: (ha-290859-m02)       <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/boot2docker.iso'/>
	I0414 14:29:31.720637 1213155 main.go:141] libmachine: (ha-290859-m02)       <target dev='hdc' bus='scsi'/>
	I0414 14:29:31.720649 1213155 main.go:141] libmachine: (ha-290859-m02)       <readonly/>
	I0414 14:29:31.720659 1213155 main.go:141] libmachine: (ha-290859-m02)     </disk>
	I0414 14:29:31.720668 1213155 main.go:141] libmachine: (ha-290859-m02)     <disk type='file' device='disk'>
	I0414 14:29:31.720684 1213155 main.go:141] libmachine: (ha-290859-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 14:29:31.720699 1213155 main.go:141] libmachine: (ha-290859-m02)       <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/ha-290859-m02.rawdisk'/>
	I0414 14:29:31.720732 1213155 main.go:141] libmachine: (ha-290859-m02)       <target dev='hda' bus='virtio'/>
	I0414 14:29:31.720746 1213155 main.go:141] libmachine: (ha-290859-m02)     </disk>
	I0414 14:29:31.720756 1213155 main.go:141] libmachine: (ha-290859-m02)     <interface type='network'>
	I0414 14:29:31.720768 1213155 main.go:141] libmachine: (ha-290859-m02)       <source network='mk-ha-290859'/>
	I0414 14:29:31.720777 1213155 main.go:141] libmachine: (ha-290859-m02)       <model type='virtio'/>
	I0414 14:29:31.720788 1213155 main.go:141] libmachine: (ha-290859-m02)     </interface>
	I0414 14:29:31.720799 1213155 main.go:141] libmachine: (ha-290859-m02)     <interface type='network'>
	I0414 14:29:31.720809 1213155 main.go:141] libmachine: (ha-290859-m02)       <source network='default'/>
	I0414 14:29:31.720821 1213155 main.go:141] libmachine: (ha-290859-m02)       <model type='virtio'/>
	I0414 14:29:31.720835 1213155 main.go:141] libmachine: (ha-290859-m02)     </interface>
	I0414 14:29:31.720844 1213155 main.go:141] libmachine: (ha-290859-m02)     <serial type='pty'>
	I0414 14:29:31.720855 1213155 main.go:141] libmachine: (ha-290859-m02)       <target port='0'/>
	I0414 14:29:31.720865 1213155 main.go:141] libmachine: (ha-290859-m02)     </serial>
	I0414 14:29:31.720875 1213155 main.go:141] libmachine: (ha-290859-m02)     <console type='pty'>
	I0414 14:29:31.720886 1213155 main.go:141] libmachine: (ha-290859-m02)       <target type='serial' port='0'/>
	I0414 14:29:31.720896 1213155 main.go:141] libmachine: (ha-290859-m02)     </console>
	I0414 14:29:31.720909 1213155 main.go:141] libmachine: (ha-290859-m02)     <rng model='virtio'>
	I0414 14:29:31.720943 1213155 main.go:141] libmachine: (ha-290859-m02)       <backend model='random'>/dev/random</backend>
	I0414 14:29:31.720956 1213155 main.go:141] libmachine: (ha-290859-m02)     </rng>
	I0414 14:29:31.720962 1213155 main.go:141] libmachine: (ha-290859-m02)     
	I0414 14:29:31.720972 1213155 main.go:141] libmachine: (ha-290859-m02)     
	I0414 14:29:31.720978 1213155 main.go:141] libmachine: (ha-290859-m02)   </devices>
	I0414 14:29:31.720993 1213155 main.go:141] libmachine: (ha-290859-m02) </domain>
	I0414 14:29:31.721002 1213155 main.go:141] libmachine: (ha-290859-m02) 
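
The block above is the complete libvirt domain definition the kvm2 driver submits for the new VM. As a rough illustration of how such XML can be produced, here is a minimal Go sketch using text/template; the domainSpec type, its field names, and the trimmed-down template are invented for this sketch and are not minikube's actual code:

package main

import (
	"os"
	"text/template"
)

// domainSpec holds the handful of values that vary between nodes.
// The type and field names are hypothetical, chosen for this sketch.
type domainSpec struct {
	Name        string
	MemoryMiB   int
	VCPUs       int
	ISOPath     string
	DiskPath    string
	NetworkName string
}

// A reduced version of the domain XML logged above.
const domainXML = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.VCPUs}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISOPath}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.NetworkName}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

func main() {
	tmpl := template.Must(template.New("domain").Parse(domainXML))
	spec := domainSpec{
		Name:        "ha-290859-m02",
		MemoryMiB:   2200,
		VCPUs:       2,
		ISOPath:     "/path/to/boot2docker.iso",   // placeholder
		DiskPath:    "/path/to/ha-290859-m02.rawdisk", // placeholder
		NetworkName: "mk-ha-290859",
	}
	// Render to stdout; a driver would hand the result to libvirt's
	// domain-define API instead.
	if err := tmpl.Execute(os.Stdout, spec); err != nil {
		panic(err)
	}
}
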
	I0414 14:29:31.727524 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:76:01:7d in network default
	I0414 14:29:31.728172 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:31.728187 1213155 main.go:141] libmachine: (ha-290859-m02) starting domain...
	I0414 14:29:31.728195 1213155 main.go:141] libmachine: (ha-290859-m02) ensuring networks are active...
	I0414 14:29:31.728896 1213155 main.go:141] libmachine: (ha-290859-m02) Ensuring network default is active
	I0414 14:29:31.729170 1213155 main.go:141] libmachine: (ha-290859-m02) Ensuring network mk-ha-290859 is active
	I0414 14:29:31.729521 1213155 main.go:141] libmachine: (ha-290859-m02) getting domain XML...
	I0414 14:29:31.730489 1213155 main.go:141] libmachine: (ha-290859-m02) creating domain...
	I0414 14:29:32.993969 1213155 main.go:141] libmachine: (ha-290859-m02) waiting for IP...
	I0414 14:29:32.996009 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:32.996441 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:32.996505 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:32.996448 1213531 retry.go:31] will retry after 202.522594ms: waiting for domain to come up
	I0414 14:29:33.201175 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:33.201705 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:33.201751 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:33.201682 1213531 retry.go:31] will retry after 346.96007ms: waiting for domain to come up
	I0414 14:29:33.550485 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:33.550900 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:33.550931 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:33.550863 1213531 retry.go:31] will retry after 407.207189ms: waiting for domain to come up
	I0414 14:29:33.959550 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:33.960116 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:33.960149 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:33.960094 1213531 retry.go:31] will retry after 434.401549ms: waiting for domain to come up
	I0414 14:29:34.395749 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:34.396217 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:34.396267 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:34.396208 1213531 retry.go:31] will retry after 552.547121ms: waiting for domain to come up
	I0414 14:29:34.949860 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:34.950310 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:34.950344 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:34.950269 1213531 retry.go:31] will retry after 848.939274ms: waiting for domain to come up
	I0414 14:29:35.800706 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:35.801275 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:35.801301 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:35.801229 1213531 retry.go:31] will retry after 1.078619357s: waiting for domain to come up
	I0414 14:29:36.881700 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:36.882163 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:36.882187 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:36.882128 1213531 retry.go:31] will retry after 1.079210669s: waiting for domain to come up
	I0414 14:29:37.963455 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:37.963935 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:37.963969 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:37.963899 1213531 retry.go:31] will retry after 1.194058186s: waiting for domain to come up
	I0414 14:29:39.160481 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:39.160993 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:39.161031 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:39.160949 1213531 retry.go:31] will retry after 1.513626688s: waiting for domain to come up
	I0414 14:29:40.676551 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:40.677038 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:40.677071 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:40.677004 1213531 retry.go:31] will retry after 1.924347004s: waiting for domain to come up
	I0414 14:29:42.603644 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:42.604168 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:42.604192 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:42.604145 1213531 retry.go:31] will retry after 2.797639018s: waiting for domain to come up
	I0414 14:29:45.405004 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:45.405658 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:45.405688 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:45.405627 1213531 retry.go:31] will retry after 2.864814671s: waiting for domain to come up
	I0414 14:29:48.274060 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:48.274518 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:48.274591 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:48.274508 1213531 retry.go:31] will retry after 4.611052523s: waiting for domain to come up
	I0414 14:29:52.886693 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:52.887068 1213155 main.go:141] libmachine: (ha-290859-m02) found domain IP: 192.168.39.111
	I0414 14:29:52.887093 1213155 main.go:141] libmachine: (ha-290859-m02) reserving static IP address...
	I0414 14:29:52.887105 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has current primary IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:52.887506 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find host DHCP lease matching {name: "ha-290859-m02", mac: "52:54:00:f0:fd:94", ip: "192.168.39.111"} in network mk-ha-290859
	I0414 14:29:52.966052 1213155 main.go:141] libmachine: (ha-290859-m02) reserved static IP address 192.168.39.111 for domain ha-290859-m02
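
The retries above start around 200ms and grow toward several seconds: a capped, jittered backoff while the guest boots and acquires a DHCP lease. A minimal Go sketch of that pattern, with a hypothetical lookupIP standing in for the libvirt lease query (here it always fails so the retry cadence is visible):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for querying libvirt's DHCP
// leases by MAC address.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP polls lookupIP with a growing, jittered delay, capped at
// 5s, roughly matching the cadence in the log above.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Sleep the base delay plus up to 50% jitter.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
		time.Sleep(sleep)
		delay *= 2
		if delay > 5*time.Second {
			delay = 5 * time.Second
		}
	}
	return "", errors.New("timed out waiting for IP")
}

func main() {
	if _, err := waitForIP(2 * time.Second); err != nil {
		fmt.Println(err)
	}
}
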
	I0414 14:29:52.966083 1213155 main.go:141] libmachine: (ha-290859-m02) waiting for SSH...
	I0414 14:29:52.966091 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Getting to WaitForSSH function...
	I0414 14:29:52.968665 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:52.969034 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:52.969082 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:52.969208 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Using SSH client type: external
	I0414 14:29:52.969231 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa (-rw-------)
	I0414 14:29:52.969263 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:29:52.969282 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | About to run SSH command:
	I0414 14:29:52.969295 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | exit 0
	I0414 14:29:53.095336 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | SSH cmd err, output: <nil>: 
	I0414 14:29:53.095545 1213155 main.go:141] libmachine: (ha-290859-m02) KVM machine creation complete
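
Creation is declared complete once a throwaway `exit 0` succeeds over SSH using the external client and the options logged above. A small Go sketch of that readiness probe, assuming placeholder host and key-path values; the flags mirror the logged command line:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs `ssh ... docker@<host> exit 0` once and reports whether
// it exited successfully, i.e. sshd is up and the key is accepted.
func sshReady(host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"docker@"+host,
		"exit", "0")
	return cmd.Run() == nil
}

func main() {
	host, key := "192.168.39.111", "/path/to/id_rsa" // placeholders
	for i := 0; i < 30; i++ {
		if sshReady(host, key) {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for SSH")
}
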
	I0414 14:29:53.095910 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
	I0414 14:29:53.096462 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:53.096622 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:53.096806 1213155 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 14:29:53.096820 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetState
	I0414 14:29:53.098070 1213155 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 14:29:53.098085 1213155 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 14:29:53.098090 1213155 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 14:29:53.098095 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.100244 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.100649 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.100680 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.100852 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.101066 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.101236 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.101372 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.101519 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:53.101769 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:53.101782 1213155 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 14:29:53.206593 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:29:53.206617 1213155 main.go:141] libmachine: Detecting the provisioner...
	I0414 14:29:53.206628 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.209588 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.209969 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.209988 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.210187 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.210382 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.210544 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.210717 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.210971 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:53.211192 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:53.211205 1213155 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 14:29:53.315888 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 14:29:53.315980 1213155 main.go:141] libmachine: found compatible host: buildroot
	I0414 14:29:53.315990 1213155 main.go:141] libmachine: Provisioning with buildroot...
	I0414 14:29:53.316001 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:29:53.316277 1213155 buildroot.go:166] provisioning hostname "ha-290859-m02"
	I0414 14:29:53.316306 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:29:53.316451 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.319393 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.319803 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.319837 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.319946 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.320140 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.320321 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.320450 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.320602 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:53.320806 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:53.320818 1213155 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-290859-m02 && echo "ha-290859-m02" | sudo tee /etc/hostname
	I0414 14:29:53.442594 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-290859-m02
	
	I0414 14:29:53.442629 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.445561 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.445918 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.445944 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.446150 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.446351 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.446528 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.446678 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.446833 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:53.447038 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:53.447053 1213155 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-290859-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-290859-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-290859-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:29:53.559946 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
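
The shell fragment above keeps /etc/hosts consistent with the new hostname: if no line already ends with the name, it rewrites an existing 127.0.1.1 entry, otherwise appends one. The same idempotent edit, sketched in Go over an in-memory copy of the file:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostname mirrors the shell logic above: if no line ends with
// the hostname, rewrite an existing 127.0.1.1 entry, else append one.
func ensureHostname(hosts, name string) string {
	if regexp.MustCompile("(?m)^.*\\s" + regexp.QuoteMeta(name) + "$").MatchString(hosts) {
		return hosts // already present, nothing to do
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	in := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
	fmt.Print(ensureHostname(in, "ha-290859-m02"))
}
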
	I0414 14:29:53.559988 1213155 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1196368/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1196368/.minikube}
	I0414 14:29:53.560014 1213155 buildroot.go:174] setting up certificates
	I0414 14:29:53.560031 1213155 provision.go:84] configureAuth start
	I0414 14:29:53.560046 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:29:53.560377 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:29:53.562822 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.563207 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.563237 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.563574 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.566107 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.566478 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.566505 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.566628 1213155 provision.go:143] copyHostCerts
	I0414 14:29:53.566676 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:29:53.566716 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem, removing ...
	I0414 14:29:53.566730 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:29:53.566839 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem (1082 bytes)
	I0414 14:29:53.566954 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:29:53.566979 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem, removing ...
	I0414 14:29:53.566987 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:29:53.567026 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem (1123 bytes)
	I0414 14:29:53.567106 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:29:53.567130 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem, removing ...
	I0414 14:29:53.567137 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:29:53.567173 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem (1675 bytes)
	I0414 14:29:53.567293 1213155 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem org=jenkins.ha-290859-m02 san=[127.0.0.1 192.168.39.111 ha-290859-m02 localhost minikube]
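
The server certificate is issued from the local CA with the SANs listed in that log line (127.0.0.1, 192.168.39.111, ha-290859-m02, localhost, minikube). A self-contained Go sketch of issuing such a certificate with crypto/x509; for illustration the CA is generated on the fly rather than loaded from the ca.pem/ca-key.pem files the log references, and key sizes and lifetimes are assumed values:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// Throwaway CA, standing in for the ca.pem/ca-key.pem pair.
	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caCert := must(x509.ParseCertificate(
		must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))))

	// Server certificate carrying the SANs from the log line above.
	srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-290859-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		DNSNames:     []string{"ha-290859-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.111")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
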
	I0414 14:29:53.976110 1213155 provision.go:177] copyRemoteCerts
	I0414 14:29:53.976184 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:29:53.976219 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.978798 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.979170 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.979202 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.979355 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.979571 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.979771 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.979950 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:29:54.060926 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0414 14:29:54.061020 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0414 14:29:54.083723 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0414 14:29:54.083818 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 14:29:54.106702 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0414 14:29:54.106773 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 14:29:54.128136 1213155 provision.go:87] duration metric: took 568.088664ms to configureAuth
	I0414 14:29:54.128177 1213155 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:29:54.128372 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:54.128400 1213155 main.go:141] libmachine: Checking connection to Docker...
	I0414 14:29:54.128413 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetURL
	I0414 14:29:54.129571 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | using libvirt version 6000000
	I0414 14:29:54.131690 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.132071 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.132095 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.132296 1213155 main.go:141] libmachine: Docker is up and running!
	I0414 14:29:54.132311 1213155 main.go:141] libmachine: Reticulating splines...
	I0414 14:29:54.132318 1213155 client.go:171] duration metric: took 23.368636066s to LocalClient.Create
	I0414 14:29:54.132344 1213155 start.go:167] duration metric: took 23.368708618s to libmachine.API.Create "ha-290859"
	I0414 14:29:54.132356 1213155 start.go:293] postStartSetup for "ha-290859-m02" (driver="kvm2")
	I0414 14:29:54.132370 1213155 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:29:54.132394 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.132652 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:29:54.132681 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:54.134726 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.135119 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.135146 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.135312 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:54.135512 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.135648 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:54.135782 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:29:54.217134 1213155 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:29:54.221237 1213155 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:29:54.221265 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/addons for local assets ...
	I0414 14:29:54.221324 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/files for local assets ...
	I0414 14:29:54.221392 1213155 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> 12036392.pem in /etc/ssl/certs
	I0414 14:29:54.221401 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /etc/ssl/certs/12036392.pem
	I0414 14:29:54.221495 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:29:54.230111 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:29:54.253934 1213155 start.go:296] duration metric: took 121.560617ms for postStartSetup
	I0414 14:29:54.253995 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
	I0414 14:29:54.254683 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:29:54.257374 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.257778 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.257811 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.258118 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:29:54.258332 1213155 start.go:128] duration metric: took 23.513984018s to createHost
	I0414 14:29:54.258362 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:54.260873 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.261257 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.261285 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.261448 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:54.261638 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.261821 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.261984 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:54.262185 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:54.262369 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:54.262379 1213155 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:29:54.367727 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744640994.343893226
	
	I0414 14:29:54.367759 1213155 fix.go:216] guest clock: 1744640994.343893226
	I0414 14:29:54.367766 1213155 fix.go:229] Guest: 2025-04-14 14:29:54.343893226 +0000 UTC Remote: 2025-04-14 14:29:54.258346943 +0000 UTC m=+69.442509295 (delta=85.546283ms)
	I0414 14:29:54.367782 1213155 fix.go:200] guest clock delta is within tolerance: 85.546283ms
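
The clock check parses the guest's `date +%s.%N` output, compares it against the host clock, and accepts a small delta; here the ~85.5ms skew is within tolerance. A Go sketch of that comparison (the tolerance constant below is an assumed example value, not minikube's):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns `date +%s.%N` output such as
// "1744640994.343893226" into a time.Time. %N always prints nine
// digits, so the fractional part can be read directly as nanoseconds.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1744640994.343893226")
	if err != nil {
		panic(err)
	}
	delta := time.Now().Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	// Assumed example tolerance, for illustration only.
	const tolerance = 2 * time.Second
	fmt.Printf("guest clock delta %v (tolerance %v, ok=%v)\n", delta, tolerance, delta <= tolerance)
}
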
	I0414 14:29:54.367788 1213155 start.go:83] releasing machines lock for "ha-290859-m02", held for 23.623550564s
	I0414 14:29:54.367807 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.368115 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:29:54.370975 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.371432 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.371462 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.373758 1213155 out.go:177] * Found network options:
	I0414 14:29:54.375127 1213155 out.go:177]   - NO_PROXY=192.168.39.110
	W0414 14:29:54.376278 1213155 proxy.go:119] fail to check proxy env: Error ip not in block
	I0414 14:29:54.376312 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.376913 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.377127 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.377268 1213155 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:29:54.377316 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	W0414 14:29:54.377370 1213155 proxy.go:119] fail to check proxy env: Error ip not in block
	I0414 14:29:54.377457 1213155 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0414 14:29:54.377481 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:54.380102 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.380374 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.380406 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.380429 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.380578 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:54.380741 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.380859 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.380897 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.380909 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:54.381045 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:29:54.381125 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:54.381305 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.381467 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:54.381614 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	W0414 14:29:54.458225 1213155 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:29:54.458308 1213155 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:29:54.490449 1213155 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
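
Bridge and podman CNI configs are sidelined by renaming them with a .mk_disabled suffix so the runtime ignores them, as the find/mv above shows. An approximate Go equivalent of that step, using glob patterns (the directory path is the one from the log; the function name is invented for this sketch):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeConfs renames any bridge or podman CNI config in dir by
// appending ".mk_disabled", skipping files already disabled.
func disableBridgeConfs(dir string) ([]string, error) {
	var disabled []string
	for _, pat := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join(dir, pat))
		if err != nil {
			return nil, err
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return nil, err
			}
			disabled = append(disabled, m)
		}
	}
	return disabled, nil
}

func main() {
	out, err := disableBridgeConfs("/etc/cni/net.d")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("disabled:", out)
}
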
	I0414 14:29:54.490475 1213155 start.go:495] detecting cgroup driver to use...
	I0414 14:29:54.490555 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0414 14:29:54.524660 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0414 14:29:54.537871 1213155 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:29:54.537936 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:29:54.549801 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:29:54.562203 1213155 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:29:54.666348 1213155 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:29:54.786710 1213155 docker.go:233] disabling docker service ...
	I0414 14:29:54.786789 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:29:54.800092 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:29:54.812105 1213155 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:29:54.936777 1213155 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:29:55.059002 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 14:29:55.072980 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:29:55.089970 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0414 14:29:55.099362 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0414 14:29:55.108681 1213155 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0414 14:29:55.108766 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0414 14:29:55.118203 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:29:55.127402 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0414 14:29:55.136483 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:29:55.145554 1213155 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:29:55.154769 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0414 14:29:55.163700 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0414 14:29:55.172612 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
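
The sed runs above edit /etc/containerd/config.toml in place: pin the pause (sandbox) image, force SystemdCgroup = false so containerd uses the cgroupfs driver, and normalize the runtime and CNI settings. The two central edits, sketched in Go with regexp over an illustrative config fragment:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Illustrative fragment of /etc/containerd/config.toml.
	cfg := `[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
`
	// Same effect as the sed invocations above: pin the pause image and
	// disable SystemdCgroup, preserving each line's indentation ($1).
	cfg = regexp.MustCompile(`(?m)^(\s*)sandbox_image = .*$`).
		ReplaceAllString(cfg, "${1}sandbox_image = \"registry.k8s.io/pause:3.10\"")
	cfg = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`).
		ReplaceAllString(cfg, "${1}SystemdCgroup = false")
	fmt.Print(cfg)
}
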
	I0414 14:29:55.181597 1213155 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:29:55.189962 1213155 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:29:55.190019 1213155 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:29:55.202112 1213155 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
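
The failed sysctl above just means the br_netfilter module is not loaded yet; the remedy, as the log shows, is modprobe followed by enabling IPv4 forwarding. A Go sketch of that sequence (requires root; shown for illustration under that assumption):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// If the bridge netfilter sysctl is missing, load the module first,
	// mirroring the `sudo modprobe br_netfilter` step above.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe failed: %v: %s\n", err, out)
			return
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		fmt.Println(err)
	}
}
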
	I0414 14:29:55.210883 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:29:55.319480 1213155 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0414 14:29:55.344914 1213155 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0414 14:29:55.345008 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:29:55.349081 1213155 retry.go:31] will retry after 1.00520308s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0414 14:29:56.354657 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
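	
	After restarting containerd, the code waits up to 60s for the socket to appear, retrying the stat after roughly a second. A local-filesystem sketch of that wait loop (a hypothetical helper, not retry.go itself):
	
	package main
	
	import (
		"fmt"
		"os"
		"time"
	)
	
	// waitForSocket polls for the containerd socket until the deadline,
	// the local analogue of the 60s wait and single retry seen above.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
	}
	
	func main() {
		if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
			fmt.Println(err)
		}
	}
	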
	I0414 14:29:56.359600 1213155 start.go:563] Will wait 60s for crictl version
	I0414 14:29:56.359685 1213155 ssh_runner.go:195] Run: which crictl
	I0414 14:29:56.363336 1213155 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:29:56.403201 1213155 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0414 14:29:56.403312 1213155 ssh_runner.go:195] Run: containerd --version
	I0414 14:29:56.430179 1213155 ssh_runner.go:195] Run: containerd --version
	I0414 14:29:56.454598 1213155 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.23 ...
	I0414 14:29:56.455785 1213155 out.go:177]   - env NO_PROXY=192.168.39.110
	I0414 14:29:56.456735 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:29:56.459280 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:56.459661 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:56.459691 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:56.459901 1213155 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 14:29:56.463673 1213155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
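	
	The bash one-liner above makes the /etc/hosts update idempotent: strip any stale host.minikube.internal entry, append the gateway IP, and copy the result back. The equivalent as a Go sketch (direct file access assumed, rather than the logged sudo cp dance):
	
	package main
	
	import (
		"log"
		"os"
		"strings"
	)
	
	// pinHostEntry drops any existing host.minikube.internal line from
	// the hosts file, then appends the current gateway IP so guests can
	// reach the host by name, mirroring the grep -v / echo pipeline.
	func pinHostEntry(hostsPath, ip string) error {
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\thost.minikube.internal") {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\thost.minikube.internal")
		return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}
	
	func main() {
		if err := pinHostEntry("/etc/hosts", "192.168.39.1"); err != nil {
			log.Fatal(err)
		}
	}
	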
	I0414 14:29:56.475057 1213155 mustload.go:65] Loading cluster: ha-290859
	I0414 14:29:56.475248 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:56.475557 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:56.475600 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:56.490597 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45247
	I0414 14:29:56.491136 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:56.491690 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:56.491711 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:56.492119 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:56.492309 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:56.493794 1213155 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:29:56.494134 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:56.494173 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:56.509360 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38381
	I0414 14:29:56.509774 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:56.510229 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:56.510256 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:56.510618 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:56.510840 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:56.511031 1213155 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859 for IP: 192.168.39.111
	I0414 14:29:56.511044 1213155 certs.go:194] generating shared ca certs ...
	I0414 14:29:56.511057 1213155 certs.go:226] acquiring lock for ca certs: {Name:mk7215406b4c41badf9eca6bf9f1036fd88f670e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:56.511177 1213155 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key
	I0414 14:29:56.511226 1213155 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key
	I0414 14:29:56.511236 1213155 certs.go:256] generating profile certs ...
	I0414 14:29:56.511347 1213155 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key
	I0414 14:29:56.511373 1213155 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e
	I0414 14:29:56.511386 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110 192.168.39.111 192.168.39.254]
	I0414 14:29:56.589532 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e ...
	I0414 14:29:56.589564 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e: {Name:mk9fb7b2adad4a62e9ebf1f50826b8647aaaa2d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:56.589727 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e ...
	I0414 14:29:56.589740 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e: {Name:mk7ad07038879568d4a23c2fb5c04f12405eb02f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:56.589811 1213155 certs.go:381] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt
	I0414 14:29:56.589948 1213155 certs.go:385] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key
	I0414 14:29:56.590096 1213155 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key
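	
	The apiserver serving cert is minted with IP SANs covering the service VIPs, both control-plane node IPs, and the kube-vip address 192.168.39.254, so clients can reach the API through any of them. A sketch of issuing such a cert with crypto/x509 (self-signed for brevity; minikube signs with its cluster CA):
	
	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"log"
		"math/big"
		"net"
		"time"
	)
	
	// issueServingCert creates an apiserver-style serving certificate
	// whose IP SANs match the list logged above.
	func issueServingCert(ips []net.IP) ([]byte, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  ips,
		}
		return x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	}
	
	func main() {
		ips := []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.110"), net.ParseIP("192.168.39.111"), net.ParseIP("192.168.39.254"),
		}
		if _, err := issueServingCert(ips); err != nil {
			log.Fatal(err)
		}
	}
	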
	I0414 14:29:56.590118 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0414 14:29:56.590137 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0414 14:29:56.590151 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0414 14:29:56.590162 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0414 14:29:56.590180 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0414 14:29:56.590198 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0414 14:29:56.590211 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0414 14:29:56.590220 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0414 14:29:56.590271 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem (1338 bytes)
	W0414 14:29:56.590298 1213155 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639_empty.pem, impossibly tiny 0 bytes
	I0414 14:29:56.590308 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 14:29:56.590327 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem (1082 bytes)
	I0414 14:29:56.590346 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:29:56.590368 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem (1675 bytes)
	I0414 14:29:56.590404 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:29:56.590430 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:56.590446 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem -> /usr/share/ca-certificates/1203639.pem
	I0414 14:29:56.590457 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /usr/share/ca-certificates/12036392.pem
	I0414 14:29:56.590494 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:56.593379 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:56.593755 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:56.593777 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:56.593996 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:56.594232 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:56.594405 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:56.594540 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:56.671687 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0414 14:29:56.677338 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0414 14:29:56.689003 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0414 14:29:56.693487 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0414 14:29:56.704430 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0414 14:29:56.708650 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0414 14:29:56.719039 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0414 14:29:56.723166 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0414 14:29:56.734152 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0414 14:29:56.738243 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0414 14:29:56.749081 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0414 14:29:56.753248 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0414 14:29:56.764073 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:29:56.788198 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:29:56.813073 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:29:56.835958 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:29:56.859645 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0414 14:29:56.882879 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 14:29:56.906187 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:29:56.928932 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 14:29:56.952365 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:29:56.974920 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem --> /usr/share/ca-certificates/1203639.pem (1338 bytes)
	I0414 14:29:56.998466 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /usr/share/ca-certificates/12036392.pem (1708 bytes)
	I0414 14:29:57.022704 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0414 14:29:57.038828 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0414 14:29:57.054237 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0414 14:29:57.069513 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0414 14:29:57.085532 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0414 14:29:57.101522 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0414 14:29:57.117372 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
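	
	Each stat-then-scp pair above first confirms a shared control-plane secret on the primary (sa.key, the front-proxy CA, the etcd CA) and reads it into memory, so the joining node receives byte-identical copies. A sketch of that fetch step (plain files stand in for the SSH session ssh_runner uses):
	
	package main
	
	import (
		"log"
		"os"
	)
	
	// fetchSecret is the stat-then-read step from the log: confirm the
	// secret exists, report its size, and read it back for reinstall
	// on the joining control-plane node.
	func fetchSecret(path string) ([]byte, error) {
		info, err := os.Stat(path)
		if err != nil {
			return nil, err
		}
		log.Printf("fetching %s (%d bytes)", path, info.Size())
		return os.ReadFile(path)
	}
	
	func main() {
		if _, err := fetchSecret("/var/lib/minikube/certs/sa.key"); err != nil {
			log.Fatal(err)
		}
	}
	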
	I0414 14:29:57.132827 1213155 ssh_runner.go:195] Run: openssl version
	I0414 14:29:57.138331 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:29:57.148324 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:57.152469 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:57.152557 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:57.158279 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 14:29:57.169126 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1203639.pem && ln -fs /usr/share/ca-certificates/1203639.pem /etc/ssl/certs/1203639.pem"
	I0414 14:29:57.179995 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1203639.pem
	I0414 14:29:57.184265 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1203639.pem
	I0414 14:29:57.184340 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1203639.pem
	I0414 14:29:57.189810 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1203639.pem /etc/ssl/certs/51391683.0"
	I0414 14:29:57.199987 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12036392.pem && ln -fs /usr/share/ca-certificates/12036392.pem /etc/ssl/certs/12036392.pem"
	I0414 14:29:57.210177 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12036392.pem
	I0414 14:29:57.214740 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/12036392.pem
	I0414 14:29:57.214815 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12036392.pem
	I0414 14:29:57.221853 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12036392.pem /etc/ssl/certs/3ec20f2e.0"
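	
	For each CA bundle the pattern above is the same: install the PEM, ask openssl for its subject hash, then link /etc/ssl/certs/<hash>.0 to it, which is the name OpenSSL consults when building trust chains (b5213941.0 for minikubeCA, for example). A Go sketch of the hash-and-link step (assumes openssl on PATH):
	
	package main
	
	import (
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	// linkCACert reproduces the logged pattern: compute the subject
	// hash with `openssl x509 -hash -noout`, then install the <hash>.0
	// symlink, removing any stale one first to mirror `ln -fs`.
	func linkCACert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		_ = os.Remove(link) // mirror `ln -fs`
		return os.Symlink(pemPath, link)
	}
	
	func main() {
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			log.Fatal(err)
		}
	}
	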
	I0414 14:29:57.232248 1213155 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:29:57.236270 1213155 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 14:29:57.236327 1213155 kubeadm.go:934] updating node {m02 192.168.39.111 8443 v1.32.2 containerd true true} ...
	I0414 14:29:57.236439 1213155 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-290859-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 14:29:57.236473 1213155 kube-vip.go:115] generating kube-vip config ...
	I0414 14:29:57.236525 1213155 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0414 14:29:57.252239 1213155 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0414 14:29:57.252336 1213155 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.10
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
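	
	The modprobe of the ip_vs modules just before this manifest is what flips lb_enable on: kube-vip's control-plane load balancing rides on IPVS, so it is only auto-enabled once the modules load. A sketch of that probe (a hypothetical helper, not minikube's kube-vip.go):
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// ipvsAvailable loads the same modules as the logged
	// `modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack`;
	// success means lb_enable/lb_port can be set in the manifest.
	func ipvsAvailable() bool {
		args := []string{"modprobe", "--all", "ip_vs", "ip_vs_rr", "ip_vs_wrr", "ip_vs_sh", "nf_conntrack"}
		return exec.Command("sudo", args...).Run() == nil
	}
	
	func main() {
		fmt.Println("control-plane load-balancing:", ipvsAvailable())
	}
	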
	I0414 14:29:57.252412 1213155 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 14:29:57.262218 1213155 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.2': No such file or directory
	
	Initiating transfer...
	I0414 14:29:57.262295 1213155 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.2
	I0414 14:29:57.271580 1213155 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
	I0414 14:29:57.271599 1213155 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubeadm
	I0414 14:29:57.271617 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubectl -> /var/lib/minikube/binaries/v1.32.2/kubectl
	I0414 14:29:57.271622 1213155 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubelet
	I0414 14:29:57.271681 1213155 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl
	I0414 14:29:57.275804 1213155 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubectl': No such file or directory
	I0414 14:29:57.275835 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubectl --> /var/lib/minikube/binaries/v1.32.2/kubectl (57323672 bytes)
	I0414 14:29:58.408400 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:29:58.423781 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubelet -> /var/lib/minikube/binaries/v1.32.2/kubelet
	I0414 14:29:58.423898 1213155 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet
	I0414 14:29:58.428378 1213155 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubelet': No such file or directory
	I0414 14:29:58.428415 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubelet --> /var/lib/minikube/binaries/v1.32.2/kubelet (77406468 bytes)
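	
	Each binary URL above carries a checksum=file:...sha256 query, which go-getter resolves by fetching the .sha256 sidecar and comparing digests before install; kubectl and kubelet make it across, and it is the in-flight kubeadm fetch that fails below. A sketch of the verify-then-install pattern with plain net/http (not go-getter itself):
	
	package main
	
	import (
		"crypto/sha256"
		"fmt"
		"io"
		"log"
		"net/http"
		"os"
		"strings"
	)
	
	func fetchBytes(url string) ([]byte, error) {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
		}
		return io.ReadAll(resp.Body)
	}
	
	// downloadVerified fetches the binary and its .sha256 sidecar,
	// compares digests, and only installs on a match.
	func downloadVerified(url, dst string) error {
		body, err := fetchBytes(url)
		if err != nil {
			return err
		}
		sum, err := fetchBytes(url + ".sha256")
		if err != nil {
			return err
		}
		want := strings.Fields(string(sum))[0]
		if got := fmt.Sprintf("%x", sha256.Sum256(body)); got != want {
			return fmt.Errorf("checksum mismatch: got %s want %s", got, want)
		}
		return os.WriteFile(dst, body, 0o755)
	}
	
	func main() {
		err := downloadVerified("https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm", "kubeadm")
		if err != nil {
			log.Fatal(err)
		}
	}
	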
	I0414 14:29:58.749359 1213155 out.go:201] 
	W0414 14:29:58.750775 1213155 out.go:270] X Exiting due to GUEST_START: failed to start node: adding node: update node: downloading binaries: downloading kubeadm: download failed: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256 Dst:/home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubeadm.download Pwd: Mode:2 Umask:---------- Detectors:[0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0] Decompressors:map[bz2:0xc0004c8690 gz:0xc0004c8698 tar:0xc0004c8610 tar.bz2:0xc0004c8620 tar.gz:0xc0004c8630 tar.xz:0xc0004c8650 tar.zst:0xc0004c8660 tbz2:0xc0004c8620 tgz:0xc0004c8630 txz:0xc0004c8650 tzst:0xc0004c8660 xz:0xc0004c8700 zip:0xc0004c8720 zst:0xc0004c8708] Getters:map[file:0xc00216a250 http:0xc00012c550 https:0xc00012c5a0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: read tcp 10.154.0.3:60586->151.101.193.55:443: read: connection reset by peer
	W0414 14:29:58.750801 1213155 out.go:270] * 
	W0414 14:29:58.751639 1213155 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 14:29:58.753070 1213155 out.go:201] 
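	
	The exit above comes down to one transient fault: a TCP reset from 151.101.193.55:443 partway through the kubeadm fetch. Unlike the containerd socket wait earlier, no retry.go attempt is logged for this download. A backoff wrapper in that spirit (a hypothetical sketch, not minikube's actual code path):
	
	package main
	
	import (
		"errors"
		"fmt"
		"time"
	)
	
	// withRetry retries f with linear backoff; a wrapper like this
	// around the kubeadm download would absorb one-off TCP resets.
	func withRetry(attempts int, f func() error) error {
		var err error
		for i := 1; i <= attempts; i++ {
			if err = f(); err == nil {
				return nil
			}
			time.Sleep(time.Duration(i) * time.Second)
		}
		return fmt.Errorf("after %d attempts: %w", attempts, err)
	}
	
	func main() {
		err := withRetry(3, func() error { return errors.New("connection reset by peer") })
		fmt.Println(err)
	}
	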
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	24e6d7cfe7ea4       8c811b4aec35f       12 minutes ago      Running             busybox                   0                   78438e8022143       busybox-58667487b6-t6bgg
	731a9f2fe8645       c69fa2e9cbf5f       13 minutes ago      Running             coredns                   0                   e56d2e4c87eea       coredns-668d6bf9bc-qnl6q
	0ec0a3a234c7c       c69fa2e9cbf5f       13 minutes ago      Running             coredns                   0                   2818c413e6e32       coredns-668d6bf9bc-wbn4p
	922f97d06563e       6e38f40d628db       13 minutes ago      Running             storage-provisioner       0                   4de376d34ee7f       storage-provisioner
	2df8ccb8d6ed9       df3849d954c98       13 minutes ago      Running             kindnet-cni               0                   08244cfc780bd       kindnet-hm99t
	e22a81661302f       f1332858868e1       13 minutes ago      Running             kube-proxy                0                   f20a0bcfbd507       kube-proxy-cg945
	9914f8879fc43       6ff023a402a69       13 minutes ago      Running             kube-vip                  0                   7b4e857fc4a72       kube-vip-ha-290859
	8263b35014337       b6a454c5a800d       13 minutes ago      Running             kube-controller-manager   0                   96ffccfabb2f0       kube-controller-manager-ha-290859
	3607093f95b04       85b7a174738ba       13 minutes ago      Running             kube-apiserver            0                   7d06c53c8318a       kube-apiserver-ha-290859
	b9d0c94204534       a9e7e6b294baf       13 minutes ago      Running             etcd                      0                   07c98c2ded11c       etcd-ha-290859
	341626ffff967       d8e673e7c9983       13 minutes ago      Running             kube-scheduler            0                   d86edf81d4f34       kube-scheduler-ha-290859
	
	
	==> containerd <==
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.168944603Z" level=info msg="StartContainer for \"0ec0a3a234c7c9ab89ca83a237362a229e9c5f0e94fdbf641b886cf994e1cd2f\""
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.181036869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qnl6q,Uid:a590080d-c4b1-4697-9849-ae6130e483a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"e56d2e4c87eea2d527e5c301e33c596e4ec4533b17e49248e3c35eeb66f90f11\""
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.186359489Z" level=info msg="CreateContainer within sandbox \"e56d2e4c87eea2d527e5c301e33c596e4ec4533b17e49248e3c35eeb66f90f11\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.209760426Z" level=info msg="CreateContainer within sandbox \"e56d2e4c87eea2d527e5c301e33c596e4ec4533b17e49248e3c35eeb66f90f11\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0\""
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.212826022Z" level=info msg="StartContainer for \"922f97d06563e10c12ce83edd45e4f1aa0b78449dcdb50b413a7f4fc80cc346b\" returns successfully"
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.215681811Z" level=info msg="StartContainer for \"731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0\""
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.285830032Z" level=info msg="StartContainer for \"0ec0a3a234c7c9ab89ca83a237362a229e9c5f0e94fdbf641b886cf994e1cd2f\" returns successfully"
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.294639585Z" level=info msg="StartContainer for \"731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0\" returns successfully"
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.131928214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-58667487b6-t6bgg,Uid:bd39f57c-bcb5-4d77-b171-6d4d2f237e54,Namespace:default,Attempt:0,}"
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.218617705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.218691310Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.218706805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.218958691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.281907696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-58667487b6-t6bgg,Uid:bd39f57c-bcb5-4d77-b171-6d4d2f237e54,Namespace:default,Attempt:0,} returns sandbox id \"78438e8022143055bed5e2d8a26db130ead88208a68bd14ca25618be3edf24e2\""
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.284050999Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\""
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.401970091Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.404464641Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28: active requests=0, bytes read=727667"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.406415797Z" level=info msg="ImageCreate event name:\"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.409920833Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.411266903Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28\" with image id \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\", repo tag \"gcr.io/k8s-minikube/busybox:1.28\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\", size \"725911\" in 2.127171694s"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.411378057Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\" returns image reference \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\""
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.414728181Z" level=info msg="CreateContainer within sandbox \"78438e8022143055bed5e2d8a26db130ead88208a68bd14ca25618be3edf24e2\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.437197602Z" level=info msg="CreateContainer within sandbox \"78438e8022143055bed5e2d8a26db130ead88208a68bd14ca25618be3edf24e2\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"24e6d7cfe7ea4490a4e08a40f32b9cf717c4d83060631102c580d6adf2fc47f5\""
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.439640223Z" level=info msg="StartContainer for \"24e6d7cfe7ea4490a4e08a40f32b9cf717c4d83060631102c580d6adf2fc47f5\""
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.489937462Z" level=info msg="StartContainer for \"24e6d7cfe7ea4490a4e08a40f32b9cf717c4d83060631102c580d6adf2fc47f5\" returns successfully"
	
	
	==> coredns [0ec0a3a234c7c9ab89ca83a237362a229e9c5f0e94fdbf641b886cf994e1cd2f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:46089 - 56153 "HINFO IN 6072608555509463616.6529762715821029691. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009374887s
	[INFO] 10.244.0.4:35907 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000221161s
	[INFO] 10.244.0.4:36782 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005796917s
	[INFO] 10.244.0.4:41522 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000189547s
	[INFO] 10.244.0.4:42146 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118814s
	[INFO] 10.244.0.4:60607 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123758s
	[INFO] 10.244.0.4:43711 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000363945s
	[INFO] 10.244.0.4:55165 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147511s
	[INFO] 10.244.0.4:37988 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063814s
	[INFO] 10.244.0.4:34715 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110518s
	
	
	==> coredns [731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:50026 - 40228 "HINFO IN 6089878548460793106.7503956428927620962. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010088983s
	[INFO] 10.244.0.4:56129 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00054069s
	[INFO] 10.244.0.4:53926 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.015577927s
	[INFO] 10.244.0.4:39454 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 1.017801671s
	[INFO] 10.244.0.4:52928 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.006480432s
	[INFO] 10.244.0.4:37155 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000144828s
	[INFO] 10.244.0.4:60063 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003567762s
	[INFO] 10.244.0.4:60207 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153406s
	[INFO] 10.244.0.4:60174 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117303s
	[INFO] 10.244.0.4:60031 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124845s
	[INFO] 10.244.0.4:43114 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000177401s
	[INFO] 10.244.0.4:59108 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000291115s
	
	
	==> describe nodes <==
	Name:               ha-290859
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-290859
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ed8f1f01b35eff2786f40199152a1775806f2de2
	                    minikube.k8s.io/name=ha-290859
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_14T14_29_26_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Apr 2025 14:29:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-290859
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Apr 2025 14:42:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Apr 2025 14:42:20 +0000   Mon, 14 Apr 2025 14:29:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Apr 2025 14:42:20 +0000   Mon, 14 Apr 2025 14:29:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Apr 2025 14:42:20 +0000   Mon, 14 Apr 2025 14:29:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Apr 2025 14:42:20 +0000   Mon, 14 Apr 2025 14:29:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.110
	  Hostname:    ha-290859
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0538f5775f954b3bbf6bc94e8eb6c49a
	  System UUID:                0538f577-5f95-4b3b-bf6b-c94e8eb6c49a
	  Boot ID:                    357ae105-a7f9-47b1-bf31-1c1aadedfe92
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-t6bgg             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-668d6bf9bc-qnl6q             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-668d6bf9bc-wbn4p             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-290859                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-hm99t                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-290859             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-290859    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-cg945                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-290859             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-290859                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node ha-290859 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node ha-290859 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node ha-290859 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m   node-controller  Node ha-290859 event: Registered Node ha-290859 in Controller
	  Normal  NodeReady                13m   kubelet          Node ha-290859 status is now: NodeReady
	
	
	Name:               ha-290859-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-290859-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ed8f1f01b35eff2786f40199152a1775806f2de2
	                    minikube.k8s.io/name=ha-290859
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_04_14T14_42_30_0700
	                    minikube.k8s.io/version=v1.35.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Apr 2025 14:42:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-290859-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Apr 2025 14:42:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Apr 2025 14:42:49 +0000   Mon, 14 Apr 2025 14:42:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Apr 2025 14:42:49 +0000   Mon, 14 Apr 2025 14:42:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Apr 2025 14:42:49 +0000   Mon, 14 Apr 2025 14:42:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Apr 2025 14:42:49 +0000   Mon, 14 Apr 2025 14:42:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.112
	  Hostname:    ha-290859-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 96e9da9bd9e1490583702338b88b0c23
	  System UUID:                96e9da9b-d9e1-4905-8370-2338b88b0c23
	  Boot ID:                    b2600615-03c7-4984-8138-73f9baedc04e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-8bg2x    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kindnet-4jz25               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26s
	  kube-system                 kube-proxy-sp56w            0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 20s                kube-proxy       
	  Normal  NodeHasSufficientMemory  26s (x2 over 26s)  kubelet          Node ha-290859-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26s (x2 over 26s)  kubelet          Node ha-290859-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s (x2 over 26s)  kubelet          Node ha-290859-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  26s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           22s                node-controller  Node ha-290859-m03 event: Registered Node ha-290859-m03 in Controller
	  Normal  NodeReady                6s                 kubelet          Node ha-290859-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051284] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038065] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.815736] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.968563] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.543371] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr14 14:29] systemd-fstab-generator[505]: Ignoring "noauto" option for root device
	[  +0.058894] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059786] systemd-fstab-generator[518]: Ignoring "noauto" option for root device
	[  +0.183634] systemd-fstab-generator[532]: Ignoring "noauto" option for root device
	[  +0.109211] systemd-fstab-generator[544]: Ignoring "noauto" option for root device
	[  +0.261328] systemd-fstab-generator[574]: Ignoring "noauto" option for root device
	[  +4.868852] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.061817] kauditd_printk_skb: 158 callbacks suppressed
	[  +0.541337] systemd-fstab-generator[688]: Ignoring "noauto" option for root device
	[  +4.433977] systemd-fstab-generator[826]: Ignoring "noauto" option for root device
	[  +0.054755] kauditd_printk_skb: 46 callbacks suppressed
	[  +7.040196] systemd-fstab-generator[1293]: Ignoring "noauto" option for root device
	[  +0.092655] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.133260] kauditd_printk_skb: 36 callbacks suppressed
	[ +14.332004] kauditd_printk_skb: 23 callbacks suppressed
	[Apr14 14:30] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [b9d0c942045346e617420beacf1ee53ebaa73b72295bfad233845fe524f8b15c] <==
	{"level":"info","ts":"2025-04-14T14:29:20.939433Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T14:29:20.940639Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a3dbfa6decfc8853","local-member-id":"fbb007bab925a598","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:29:20.940850Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:29:20.940910Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:29:20.941291Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T14:29:20.941327Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T14:29:20.942134Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-14T14:29:20.942264Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.110:2379"}
	{"level":"info","ts":"2025-04-14T14:29:20.943625Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-14T14:29:20.943655Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2025-04-14T14:29:27.104552Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.197172ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/node-controller\" limit:1 ","response":"range_response_count:1 size:195"}
	{"level":"info","ts":"2025-04-14T14:29:27.104712Z","caller":"traceutil/trace.go:171","msg":"trace[2014118741] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/node-controller; range_end:; response_count:1; response_revision:283; }","duration":"161.489617ms","start":"2025-04-14T14:29:26.943197Z","end":"2025-04-14T14:29:27.104687Z","steps":["trace[2014118741] 'range keys from in-memory index tree'  (duration: 161.141805ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:29:27.105569Z","caller":"traceutil/trace.go:171","msg":"trace[1003808847] transaction","detail":"{read_only:false; response_revision:284; number_of_response:1; }","duration":"157.128151ms","start":"2025-04-14T14:29:26.948431Z","end":"2025-04-14T14:29:27.105559Z","steps":["trace[1003808847] 'process raft request'  (duration: 84.378612ms)","trace[1003808847] 'compare'  (duration: 71.52798ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-14T14:29:27.104865Z","caller":"traceutil/trace.go:171","msg":"trace[43329066] linearizableReadLoop","detail":"{readStateIndex:297; appliedIndex:296; }","duration":"119.436827ms","start":"2025-04-14T14:29:26.985404Z","end":"2025-04-14T14:29:27.104841Z","steps":["trace[43329066] 'read index received'  (duration: 47.335931ms)","trace[43329066] 'applied index is now lower than readState.Index'  (duration: 72.100547ms)"],"step_count":2}
	{"level":"warn","ts":"2025-04-14T14:29:27.105882Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.482108ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-290859\" limit:1 ","response":"range_response_count:1 size:4024"}
	{"level":"info","ts":"2025-04-14T14:29:27.105907Z","caller":"traceutil/trace.go:171","msg":"trace[1848025885] range","detail":"{range_begin:/registry/minions/ha-290859; range_end:; response_count:1; response_revision:284; }","duration":"120.538719ms","start":"2025-04-14T14:29:26.985360Z","end":"2025-04-14T14:29:27.105899Z","steps":["trace[1848025885] 'agreement among raft nodes before linearized reading'  (duration: 120.384333ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:30:04.979205Z","caller":"traceutil/trace.go:171","msg":"trace[85484590] transaction","detail":"{read_only:false; response_revision:496; number_of_response:1; }","duration":"156.247744ms","start":"2025-04-14T14:30:04.822935Z","end":"2025-04-14T14:30:04.979183Z","steps":["trace[85484590] 'process raft request'  (duration: 156.102613ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:39:20.967676Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":955}
	{"level":"info","ts":"2025-04-14T14:39:20.980951Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":955,"took":"12.971168ms","hash":3281203929,"current-db-size-bytes":2400256,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2400256,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2025-04-14T14:39:20.980998Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3281203929,"revision":955,"compact-revision":-1}
	{"level":"info","ts":"2025-04-14T14:42:12.425594Z","caller":"traceutil/trace.go:171","msg":"trace[593749251] linearizableReadLoop","detail":"{readStateIndex:1974; appliedIndex:1973; }","duration":"103.549581ms","start":"2025-04-14T14:42:12.322004Z","end":"2025-04-14T14:42:12.425554Z","steps":["trace[593749251] 'read index received'  (duration: 102.720139ms)","trace[593749251] 'applied index is now lower than readState.Index'  (duration: 828.805µs)"],"step_count":2}
	{"level":"warn","ts":"2025-04-14T14:42:12.426144Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.759593ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2025-04-14T14:42:12.426196Z","caller":"traceutil/trace.go:171","msg":"trace[257637869] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; response_count:0; response_revision:1805; }","duration":"104.23976ms","start":"2025-04-14T14:42:12.321948Z","end":"2025-04-14T14:42:12.426188Z","steps":["trace[257637869] 'agreement among raft nodes before linearized reading'  (duration: 103.769974ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:42:12.425685Z","caller":"traceutil/trace.go:171","msg":"trace[874985590] transaction","detail":"{read_only:false; response_revision:1805; number_of_response:1; }","duration":"128.996586ms","start":"2025-04-14T14:42:12.296675Z","end":"2025-04-14T14:42:12.425672Z","steps":["trace[874985590] 'process raft request'  (duration: 128.079961ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T14:42:29.811595Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.362023ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11932452365827166964 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:25989634b465d2f3>","response":"size:42"}
	
	
	==> kernel <==
	 14:42:55 up 14 min,  0 users,  load average: 0.16, 0.19, 0.11
	Linux ha-290859 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2df8ccb8d6ed928a95e69ecd1be2105fc737c699aa26805820a0af0eca5bb50d] <==
	I0414 14:41:34.500339       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:41:34.500416       1 main.go:301] handling current node
	I0414 14:41:44.500407       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:41:44.500557       1 main.go:301] handling current node
	I0414 14:41:54.509039       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:41:54.509064       1 main.go:301] handling current node
	I0414 14:42:04.509599       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:42:04.509640       1 main.go:301] handling current node
	I0414 14:42:14.505184       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:42:14.505543       1 main.go:301] handling current node
	I0414 14:42:24.502960       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:42:24.503004       1 main.go:301] handling current node
	I0414 14:42:34.500754       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:42:34.501033       1 main.go:301] handling current node
	I0414 14:42:34.501166       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:42:34.501231       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:42:34.501702       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.112 Flags: [] Table: 0 Realm: 0} 
	I0414 14:42:44.500437       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:42:44.500523       1 main.go:301] handling current node
	I0414 14:42:44.500540       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:42:44.500545       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:42:54.501089       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:42:54.501145       1 main.go:301] handling current node
	I0414 14:42:54.501166       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:42:54.501175       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
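	
	Note: kindnet handles only the local node until a peer registers; once ha-290859-m03 joins with pod CIDR 10.244.1.0/24, the "Adding route" line shows kindnet installing a host route to that CIDR via the node IP 192.168.39.112. A minimal verification from the primary node (the interface name is an assumption and may differ):
	
	  out/minikube-linux-amd64 -p ha-290859 ssh -- ip route show 10.244.1.0/24
	  # expected: 10.244.1.0/24 via 192.168.39.112 dev eth0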
	
	
	==> kube-apiserver [3607093f95b0430c4841d7be9ed19d0163ff2e9ee2889a44f89bd1ca07bf42d3] <==
	I0414 14:29:22.362271       1 autoregister_controller.go:144] Starting autoregister controller
	I0414 14:29:22.362276       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0414 14:29:22.362280       1 cache.go:39] Caches are synced for autoregister controller
	I0414 14:29:22.378719       1 controller.go:615] quota admission added evaluator for: namespaces
	I0414 14:29:22.457815       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0414 14:29:23.164003       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0414 14:29:23.168635       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0414 14:29:23.168816       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0414 14:29:23.763560       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0414 14:29:23.812117       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0414 14:29:23.884276       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0414 14:29:23.896601       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.110]
	I0414 14:29:23.897534       1 controller.go:615] quota admission added evaluator for: endpoints
	I0414 14:29:23.902387       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0414 14:29:24.193931       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0414 14:29:25.780107       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0414 14:29:25.808820       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0414 14:29:25.816856       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0414 14:29:29.653221       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0414 14:29:29.756960       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0414 14:41:55.019097       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52466: use of closed network connection
	E0414 14:41:55.440782       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52532: use of closed network connection
	E0414 14:41:55.859929       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52600: use of closed network connection
	E0414 14:41:58.277207       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52686: use of closed network connection
	E0414 14:41:58.438151       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52698: use of closed network connection
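	
	Note: the "use of closed network connection" errors are the apiserver logging clients that hung up mid-stream (here, short-lived connections from the test host at 192.168.39.1); they are noisy but benign. To check whether anything else is failing in this window, the log can be filtered, for example:
	
	  kubectl --context ha-290859 -n kube-system logs kube-apiserver-ha-290859 | grep -v 'closed network connection' | grep '^E'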
	
	
	==> kube-controller-manager [8263b35014337f6119ba3a0d6487090fd5b1b3b8a002a99623620e847d186847] <==
	I0414 14:30:03.844627       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="57.422µs"
	I0414 14:30:26.371478       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
	I0414 14:37:12.908997       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
	I0414 14:42:20.033463       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
	I0414 14:42:29.935163       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-290859-m03\" does not exist"
	I0414 14:42:29.948852       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-290859-m03" podCIDRs=["10.244.1.0/24"]
	I0414 14:42:29.949152       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:29.949831       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:29.958386       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="234.248µs"
	I0414 14:42:29.963750       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:29.969981       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="39.002µs"
	I0414 14:42:30.275380       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:30.614411       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:33.964410       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-290859-m03"
	I0414 14:42:34.046665       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:39.961881       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:49.191468       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-290859-m03"
	I0414 14:42:49.192361       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:49.201252       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:49.216690       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="71.679µs"
	I0414 14:42:49.217122       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="45.948µs"
	I0414 14:42:49.230018       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="69.053µs"
	I0414 14:42:52.664944       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="13.387962ms"
	I0414 14:42:52.665652       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="82.546µs"
	I0414 14:42:53.979890       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
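	
	Note: the single "Failed to update statusUpdateNeeded field" error is a transient race in the attach-detach controller, hit when the new node object (ha-290859-m03) is seen by one informer before it lands in another's cache; the "Set node PodCIDR" and repeated "Successfully synced" lines right after show it resolved on its own. Node registration can be watched live while a node joins:
	
	  kubectl --context ha-290859 get nodes -w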
	
	
	==> kube-proxy [e22a81661302ff340c9846a7a06a13d955ab98cfe8e7088e0c805fb4f3eee8a2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0414 14:29:30.555771       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0414 14:29:30.580550       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.110"]
	E0414 14:29:30.580640       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0414 14:29:30.617235       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0414 14:29:30.617293       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0414 14:29:30.617328       1 server_linux.go:170] "Using iptables Proxier"
	I0414 14:29:30.620046       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0414 14:29:30.620989       1 server.go:497] "Version info" version="v1.32.2"
	I0414 14:29:30.621018       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 14:29:30.625365       1 config.go:329] "Starting node config controller"
	I0414 14:29:30.625863       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0414 14:29:30.628597       1 config.go:199] "Starting service config controller"
	I0414 14:29:30.628644       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0414 14:29:30.628665       1 config.go:105] "Starting endpoint slice config controller"
	I0414 14:29:30.628683       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0414 14:29:30.726314       1 shared_informer.go:320] Caches are synced for node config
	I0414 14:29:30.729639       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0414 14:29:30.729680       1 shared_informer.go:320] Caches are synced for service config
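	
	Note: this section opens mid-message because only the last 25 lines of each component log are captured (logs -n 25). The nftables cleanup errors mean the Buildroot guest kernel has no nftables support, and the IPv6 stack is disabled for the same reason ("No iptables support for family" IPv6), so kube-proxy falls back to single-stack IPv4 iptables mode; on this image the errors are expected. The same limitation can be reproduced by hand (the command should fail, either with "Operation not supported" or because the nft binary is absent from the image):
	
	  out/minikube-linux-amd64 -p ha-290859 ssh -- sudo nft list tables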
	
	
	==> kube-scheduler [341626ffff967b14e3bfaa050905eba2b82a07223c0356ee50b5deeef6d9898b] <==
	E0414 14:29:22.288686       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:22.287191       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:22.288704       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:22.286394       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0414 14:29:22.288719       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0414 14:29:22.285771       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.108289       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0414 14:29:23.108351       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.153824       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:23.153954       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.203744       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0414 14:29:23.203977       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0414 14:29:23.367236       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0414 14:29:23.367550       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.396026       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0414 14:29:23.396243       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.401643       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:23.401820       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.425454       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0414 14:29:23.425684       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.433181       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:23.433222       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.457688       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0414 14:29:23.457949       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0414 14:29:25.662221       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
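	
	Note: the forbidden/RBAC warnings all fall before 14:29:25 and reflect the usual control-plane bootstrap ordering: the scheduler's informers start listing before the apiserver has reconciled the system:kube-scheduler RBAC bindings, so the first list calls return 403 and are retried. The closing "Caches are synced" line marks recovery; a quick check that no new RBAC errors appeared recently:
	
	  kubectl --context ha-290859 -n kube-system logs kube-scheduler-ha-290859 --since=10m | grep -c 'is forbidden'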
	
	
	==> kubelet <==
	Apr 14 14:38:25 ha-290859 kubelet[1300]: E0414 14:38:25.691874    1300 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:38:25 ha-290859 kubelet[1300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:38:25 ha-290859 kubelet[1300]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:38:25 ha-290859 kubelet[1300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:38:25 ha-290859 kubelet[1300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 14:39:25 ha-290859 kubelet[1300]: E0414 14:39:25.692811    1300 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:39:25 ha-290859 kubelet[1300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:39:25 ha-290859 kubelet[1300]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:39:25 ha-290859 kubelet[1300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:39:25 ha-290859 kubelet[1300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 14:40:25 ha-290859 kubelet[1300]: E0414 14:40:25.693003    1300 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:40:25 ha-290859 kubelet[1300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:40:25 ha-290859 kubelet[1300]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:40:25 ha-290859 kubelet[1300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:40:25 ha-290859 kubelet[1300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 14:41:25 ha-290859 kubelet[1300]: E0414 14:41:25.692589    1300 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:41:25 ha-290859 kubelet[1300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:41:25 ha-290859 kubelet[1300]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:41:25 ha-290859 kubelet[1300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:41:25 ha-290859 kubelet[1300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 14:42:25 ha-290859 kubelet[1300]: E0414 14:42:25.692394    1300 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:42:25 ha-290859 kubelet[1300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:42:25 ha-290859 kubelet[1300]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:42:25 ha-290859 kubelet[1300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:42:25 ha-290859 kubelet[1300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
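	
	Note: the kubelet's iptables "canary" probe fails once a minute because the guest kernel lacks the ip6table_nat module, so ip6tables cannot create its nat table; on an IPv4-only cluster this is cosmetic. The missing module can be confirmed from the node (expected to print nothing on this image):
	
	  out/minikube-linux-amd64 -p ha-290859 ssh -- "lsmod | grep ip6table_nat"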
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-290859 -n ha-290859
helpers_test.go:261: (dbg) Run:  kubectl --context ha-290859 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-58667487b6-q9jvx
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-290859 describe pod busybox-58667487b6-q9jvx
helpers_test.go:282: (dbg) kubectl --context ha-290859 describe pod busybox-58667487b6-q9jvx:

                                                
                                                
-- stdout --
	Name:             busybox-58667487b6-q9jvx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=58667487b6
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-58667487b6
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fklg7 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-fklg7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  2m30s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  18s (x2 over 27s)    default-scheduler  0/2 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  7s                   default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
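
Note: the events above explain the Pending pod: the busybox deployment is scheduled with required pod anti-affinity (presumably keyed on the hostname topology), so at most one replica fits per node, and the last replica waits until ha-290859-m03 becomes Ready. The exact rule the scheduler is enforcing can be read back from the live object rather than guessed:

	kubectl --context ha-290859 get deploy busybox -o jsonpath='{.spec.template.spec.affinity.podAntiAffinity}'
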
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterClusterStart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (2.47s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (2.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-290859 status --output json -v=7 --alsologtostderr
ha_test.go:328: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-290859 status --output json -v=7 --alsologtostderr: exit status 2 (581.34339ms)

                                                
                                                
-- stdout --
	[{"Name":"ha-290859","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"ha-290859-m02","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false},{"Name":"ha-290859-m03","Host":"Running","Kubelet":"Running","APIServer":"Irrelevant","Kubeconfig":"Irrelevant","Worker":true}]

                                                
                                                
-- /stdout --
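
Note: this JSON is the proximate cause of the failure, and likely of the preceding HAppyAfterClusterStart failure as well: ha-290859-m02 reports Kubelet and APIServer "Stopped" while its host is Running, so any status-gated step aborts before the test body runs. The one-line output is easier to scan tabulated (assuming jq is available on the test host):

	out/minikube-linux-amd64 -p ha-290859 status --output json | jq -r '.[] | [.Name, .Host, .Kubelet, .APIServer] | @tsv'
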
** stderr ** 
	I0414 14:42:56.168463 1217765 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:42:56.168734 1217765 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:42:56.168744 1217765 out.go:358] Setting ErrFile to fd 2...
	I0414 14:42:56.168747 1217765 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:42:56.168921 1217765 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
	I0414 14:42:56.169094 1217765 out.go:352] Setting JSON to true
	I0414 14:42:56.169130 1217765 mustload.go:65] Loading cluster: ha-290859
	I0414 14:42:56.169230 1217765 notify.go:220] Checking for updates...
	I0414 14:42:56.169580 1217765 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:42:56.169614 1217765 status.go:174] checking status of ha-290859 ...
	I0414 14:42:56.170158 1217765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:42:56.170219 1217765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:42:56.186377 1217765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34971
	I0414 14:42:56.186827 1217765 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:42:56.187525 1217765 main.go:141] libmachine: Using API Version  1
	I0414 14:42:56.187557 1217765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:42:56.187922 1217765 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:42:56.188130 1217765 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:42:56.189646 1217765 status.go:371] ha-290859 host status = "Running" (err=<nil>)
	I0414 14:42:56.189666 1217765 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:42:56.189962 1217765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:42:56.190021 1217765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:42:56.206868 1217765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33435
	I0414 14:42:56.207391 1217765 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:42:56.207890 1217765 main.go:141] libmachine: Using API Version  1
	I0414 14:42:56.207911 1217765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:42:56.208204 1217765 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:42:56.208389 1217765 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:42:56.211333 1217765 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:42:56.211687 1217765 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:42:56.211714 1217765 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:42:56.211852 1217765 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:42:56.212179 1217765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:42:56.212229 1217765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:42:56.228411 1217765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45667
	I0414 14:42:56.228874 1217765 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:42:56.229318 1217765 main.go:141] libmachine: Using API Version  1
	I0414 14:42:56.229339 1217765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:42:56.229682 1217765 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:42:56.229963 1217765 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:42:56.230174 1217765 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 14:42:56.230213 1217765 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:42:56.233131 1217765 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:42:56.233607 1217765 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:42:56.233641 1217765 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:42:56.233731 1217765 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:42:56.233930 1217765 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:42:56.234110 1217765 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:42:56.234273 1217765 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:42:56.318874 1217765 ssh_runner.go:195] Run: systemctl --version
	I0414 14:42:56.325630 1217765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:42:56.339703 1217765 kubeconfig.go:125] found "ha-290859" server: "https://192.168.39.254:8443"
	I0414 14:42:56.339750 1217765 api_server.go:166] Checking apiserver status ...
	I0414 14:42:56.339806 1217765 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:42:56.353357 1217765 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1191/cgroup
	W0414 14:42:56.362802 1217765 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1191/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0414 14:42:56.362893 1217765 ssh_runner.go:195] Run: ls
	I0414 14:42:56.367316 1217765 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0414 14:42:56.371431 1217765 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0414 14:42:56.371457 1217765 status.go:463] ha-290859 apiserver status = Running (err=<nil>)
	I0414 14:42:56.371468 1217765 status.go:176] ha-290859 status: &{Name:ha-290859 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 14:42:56.371483 1217765 status.go:174] checking status of ha-290859-m02 ...
	I0414 14:42:56.371785 1217765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:42:56.371837 1217765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:42:56.388097 1217765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34749
	I0414 14:42:56.388623 1217765 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:42:56.389057 1217765 main.go:141] libmachine: Using API Version  1
	I0414 14:42:56.389080 1217765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:42:56.389442 1217765 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:42:56.389632 1217765 main.go:141] libmachine: (ha-290859-m02) Calling .GetState
	I0414 14:42:56.391154 1217765 status.go:371] ha-290859-m02 host status = "Running" (err=<nil>)
	I0414 14:42:56.391180 1217765 host.go:66] Checking if "ha-290859-m02" exists ...
	I0414 14:42:56.391519 1217765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:42:56.391570 1217765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:42:56.409836 1217765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44227
	I0414 14:42:56.410424 1217765 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:42:56.410949 1217765 main.go:141] libmachine: Using API Version  1
	I0414 14:42:56.410984 1217765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:42:56.411406 1217765 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:42:56.411635 1217765 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:42:56.414428 1217765 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:42:56.414869 1217765 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:42:56.414893 1217765 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:42:56.415038 1217765 host.go:66] Checking if "ha-290859-m02" exists ...
	I0414 14:42:56.415402 1217765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:42:56.415455 1217765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:42:56.432052 1217765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45059
	I0414 14:42:56.432573 1217765 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:42:56.433130 1217765 main.go:141] libmachine: Using API Version  1
	I0414 14:42:56.433155 1217765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:42:56.433504 1217765 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:42:56.433701 1217765 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:42:56.433973 1217765 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 14:42:56.434005 1217765 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:42:56.437177 1217765 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:42:56.437597 1217765 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:42:56.437631 1217765 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:42:56.437765 1217765 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:42:56.437950 1217765 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:42:56.438105 1217765 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:42:56.438303 1217765 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:42:56.518037 1217765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:42:56.532291 1217765 kubeconfig.go:125] found "ha-290859" server: "https://192.168.39.254:8443"
	I0414 14:42:56.532327 1217765 api_server.go:166] Checking apiserver status ...
	I0414 14:42:56.532367 1217765 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0414 14:42:56.544061 1217765 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0414 14:42:56.544092 1217765 status.go:463] ha-290859-m02 apiserver status = Stopped (err=<nil>)
	I0414 14:42:56.544108 1217765 status.go:176] ha-290859-m02 status: &{Name:ha-290859-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 14:42:56.544131 1217765 status.go:174] checking status of ha-290859-m03 ...
	I0414 14:42:56.544541 1217765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:42:56.544592 1217765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:42:56.561441 1217765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35791
	I0414 14:42:56.561962 1217765 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:42:56.562411 1217765 main.go:141] libmachine: Using API Version  1
	I0414 14:42:56.562436 1217765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:42:56.562795 1217765 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:42:56.562998 1217765 main.go:141] libmachine: (ha-290859-m03) Calling .GetState
	I0414 14:42:56.564840 1217765 status.go:371] ha-290859-m03 host status = "Running" (err=<nil>)
	I0414 14:42:56.564862 1217765 host.go:66] Checking if "ha-290859-m03" exists ...
	I0414 14:42:56.565165 1217765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:42:56.565201 1217765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:42:56.580655 1217765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35927
	I0414 14:42:56.581121 1217765 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:42:56.581634 1217765 main.go:141] libmachine: Using API Version  1
	I0414 14:42:56.581656 1217765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:42:56.582064 1217765 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:42:56.582272 1217765 main.go:141] libmachine: (ha-290859-m03) Calling .GetIP
	I0414 14:42:56.585057 1217765 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:42:56.585504 1217765 main.go:141] libmachine: (ha-290859-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:4a:72", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:42:14 +0000 UTC Type:0 Mac:52:54:00:b7:4a:72 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-290859-m03 Clientid:01:52:54:00:b7:4a:72}
	I0414 14:42:56.585545 1217765 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined IP address 192.168.39.112 and MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:42:56.585674 1217765 host.go:66] Checking if "ha-290859-m03" exists ...
	I0414 14:42:56.586144 1217765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:42:56.586195 1217765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:42:56.603155 1217765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39491
	I0414 14:42:56.603754 1217765 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:42:56.604270 1217765 main.go:141] libmachine: Using API Version  1
	I0414 14:42:56.604295 1217765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:42:56.604723 1217765 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:42:56.604962 1217765 main.go:141] libmachine: (ha-290859-m03) Calling .DriverName
	I0414 14:42:56.605158 1217765 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 14:42:56.605182 1217765 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHHostname
	I0414 14:42:56.608075 1217765 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:42:56.608491 1217765 main.go:141] libmachine: (ha-290859-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:4a:72", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:42:14 +0000 UTC Type:0 Mac:52:54:00:b7:4a:72 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-290859-m03 Clientid:01:52:54:00:b7:4a:72}
	I0414 14:42:56.608513 1217765 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined IP address 192.168.39.112 and MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:42:56.608670 1217765 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHPort
	I0414 14:42:56.608853 1217765 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHKeyPath
	I0414 14:42:56.608999 1217765 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHUsername
	I0414 14:42:56.609121 1217765 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m03/id_rsa Username:docker}
	I0414 14:42:56.686543 1217765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:42:56.700205 1217765 status.go:176] ha-290859-m03 status: &{Name:ha-290859-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:330: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-290859 status --output json -v=7 --alsologtostderr" : exit status 2
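
Note: the stderr trace shows how status decides "Stopped": for each node it SSHes in, checks systemctl is-active for the kubelet, runs pgrep for kube-apiserver, and probes https://192.168.39.254:8443/healthz; on ha-290859-m02 the pgrep returns nothing, so the component is marked Stopped. Exit status 2 appears to be minikube's bit-encoded component state rather than a command crash. The kubelet check can be replayed by hand against the suspect node:

	out/minikube-linux-amd64 -p ha-290859 ssh -n ha-290859-m02 -- sudo systemctl is-active kubelet
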
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-290859 -n ha-290859
helpers_test.go:244: <<< TestMultiControlPlane/serial/CopyFile FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-290859 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-290859 logs -n 25: (1.179269121s)
helpers_test.go:252: TestMultiControlPlane/serial/CopyFile logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x -- nslookup |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx -- nslookup |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg -- nslookup |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x             |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx             |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg             |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg -- sh       |           |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1            |           |         |         |                     |                     |
	| node    | add -p ha-290859 -v=7                | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:42 UTC | 14 Apr 25 14:42 UTC |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 14:28:44
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 14:28:44.853283 1213155 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:28:44.853383 1213155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:28:44.853391 1213155 out.go:358] Setting ErrFile to fd 2...
	I0414 14:28:44.853395 1213155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:28:44.853589 1213155 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
	I0414 14:28:44.854173 1213155 out.go:352] Setting JSON to false
	I0414 14:28:44.855127 1213155 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":22268,"bootTime":1744618657,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 14:28:44.855241 1213155 start.go:139] virtualization: kvm guest
	I0414 14:28:44.857434 1213155 out.go:177] * [ha-290859] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 14:28:44.858763 1213155 out.go:177]   - MINIKUBE_LOCATION=20512
	I0414 14:28:44.858802 1213155 notify.go:220] Checking for updates...
	I0414 14:28:44.861113 1213155 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 14:28:44.862568 1213155 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:28:44.864291 1213155 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:28:44.865558 1213155 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 14:28:44.866690 1213155 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 14:28:44.867994 1213155 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 14:28:44.903880 1213155 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 14:28:44.904972 1213155 start.go:297] selected driver: kvm2
	I0414 14:28:44.904990 1213155 start.go:901] validating driver "kvm2" against <nil>
	I0414 14:28:44.905002 1213155 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 14:28:44.905693 1213155 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:28:44.905760 1213155 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20512-1196368/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 14:28:44.921165 1213155 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 14:28:44.921211 1213155 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 14:28:44.921449 1213155 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 14:28:44.921483 1213155 cni.go:84] Creating CNI manager for ""
	I0414 14:28:44.921521 1213155 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0414 14:28:44.921528 1213155 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
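
kindnet is recommended here because the start was requested with --ha (multi-node) and no explicit --cni flag. A minimal Go sketch of that decision; the function name and parameters are illustrative, not minikube's actual API:

    package main

    import "fmt"

    // chooseCNI sketches the decision visible in the log: with no explicit
    // CNI requested and a multi-node cluster, pick kindnet and force
    // NetworkPlugin=cni.
    func chooseCNI(requested string, nodeCount int, multiNodeRequested bool) string {
        if requested != "" {
            return requested // user choice always wins
        }
        if multiNodeRequested || nodeCount > 1 {
            return "kindnet"
        }
        return "bridge"
    }

    func main() {
        fmt.Println(chooseCNI("", 0, true)) // kindnet, as in the log line above
    }
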
	I0414 14:28:44.921581 1213155 start.go:340] cluster config:
	{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:28:44.921681 1213155 iso.go:125] acquiring lock: {Name:mkbf783c803effe6c4b8297ac6b84dcca9e29413 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:28:44.923479 1213155 out.go:177] * Starting "ha-290859" primary control-plane node in "ha-290859" cluster
	I0414 14:28:44.924489 1213155 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:28:44.924534 1213155 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
	I0414 14:28:44.924545 1213155 cache.go:56] Caching tarball of preloaded images
	I0414 14:28:44.924630 1213155 preload.go:172] Found /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0414 14:28:44.924642 1213155 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0414 14:28:44.925004 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:28:44.925036 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json: {Name:mk9cf46898e9311ef305249e5d7a46d116958366 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:28:44.925215 1213155 start.go:360] acquireMachinesLock for ha-290859: {Name:mk496006d22a0565bb9e0d565e1b3cb0cf0971cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 14:28:44.925249 1213155 start.go:364] duration metric: took 19.936µs to acquireMachinesLock for "ha-290859"
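
acquireMachinesLock serializes machine creation across concurrent minikube processes, retrying every Delay until Timeout (500ms / 13m in the lock spec above). A sketch of the same pattern using a plain O_EXCL lock file; minikube itself uses a named mutex, so this is an illustration only:

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // acquire retries creating an exclusive lock file every `delay`
    // until `timeout`, mirroring the Delay/Timeout fields in the log.
    func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, errors.New("timed out acquiring lock " + path)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        release, err := acquire("/tmp/minikube-machines.lock", 500*time.Millisecond, 13*time.Minute)
        if err != nil {
            fmt.Println(err)
            return
        }
        defer release()
        fmt.Println("lock held; safe to create the machine")
    }
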
	I0414 14:28:44.925270 1213155 start.go:93] Provisioning new machine with config: &{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0414 14:28:44.925333 1213155 start.go:125] createHost starting for "" (driver="kvm2")
	I0414 14:28:44.926873 1213155 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0414 14:28:44.927025 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:28:44.927081 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:28:44.941913 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35769
	I0414 14:28:44.942352 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:28:44.942833 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:28:44.942851 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:28:44.943193 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:28:44.943375 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:28:44.943526 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:28:44.943664 1213155 start.go:159] libmachine.API.Create for "ha-290859" (driver="kvm2")
	I0414 14:28:44.943687 1213155 client.go:168] LocalClient.Create starting
	I0414 14:28:44.943713 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem
	I0414 14:28:44.943749 1213155 main.go:141] libmachine: Decoding PEM data...
	I0414 14:28:44.943766 1213155 main.go:141] libmachine: Parsing certificate...
	I0414 14:28:44.943825 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem
	I0414 14:28:44.943844 1213155 main.go:141] libmachine: Decoding PEM data...
	I0414 14:28:44.943857 1213155 main.go:141] libmachine: Parsing certificate...
	I0414 14:28:44.943880 1213155 main.go:141] libmachine: Running pre-create checks...
	I0414 14:28:44.943888 1213155 main.go:141] libmachine: (ha-290859) Calling .PreCreateCheck
	I0414 14:28:44.944202 1213155 main.go:141] libmachine: (ha-290859) Calling .GetConfigRaw
	I0414 14:28:44.944583 1213155 main.go:141] libmachine: Creating machine...
	I0414 14:28:44.944596 1213155 main.go:141] libmachine: (ha-290859) Calling .Create
	I0414 14:28:44.944741 1213155 main.go:141] libmachine: (ha-290859) creating KVM machine...
	I0414 14:28:44.944764 1213155 main.go:141] libmachine: (ha-290859) creating network...
	I0414 14:28:44.945897 1213155 main.go:141] libmachine: (ha-290859) DBG | found existing default KVM network
	I0414 14:28:44.946500 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:44.946375 1213178 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001236b0}
	I0414 14:28:44.946525 1213155 main.go:141] libmachine: (ha-290859) DBG | created network xml: 
	I0414 14:28:44.946536 1213155 main.go:141] libmachine: (ha-290859) DBG | <network>
	I0414 14:28:44.946547 1213155 main.go:141] libmachine: (ha-290859) DBG |   <name>mk-ha-290859</name>
	I0414 14:28:44.946556 1213155 main.go:141] libmachine: (ha-290859) DBG |   <dns enable='no'/>
	I0414 14:28:44.946567 1213155 main.go:141] libmachine: (ha-290859) DBG |   
	I0414 14:28:44.946578 1213155 main.go:141] libmachine: (ha-290859) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0414 14:28:44.946589 1213155 main.go:141] libmachine: (ha-290859) DBG |     <dhcp>
	I0414 14:28:44.946597 1213155 main.go:141] libmachine: (ha-290859) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0414 14:28:44.946611 1213155 main.go:141] libmachine: (ha-290859) DBG |     </dhcp>
	I0414 14:28:44.946635 1213155 main.go:141] libmachine: (ha-290859) DBG |   </ip>
	I0414 14:28:44.946659 1213155 main.go:141] libmachine: (ha-290859) DBG |   
	I0414 14:28:44.946681 1213155 main.go:141] libmachine: (ha-290859) DBG | </network>
	I0414 14:28:44.946692 1213155 main.go:141] libmachine: (ha-290859) DBG | 
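
The network XML above is rendered from the cluster's subnet parameters before being handed to libvirt. A sketch of that rendering with text/template; the field names are assumptions for illustration, not the kvm2 driver's real struct:

    package main

    import (
        "log"
        "os"
        "text/template"
    )

    // networkTmpl mirrors the XML printed in the log lines above.
    const networkTmpl = `<network>
      <name>{{.Name}}</name>
      <dns enable='no'/>
      <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
        <dhcp>
          <range start='{{.ClientMin}}' end='{{.ClientMax}}'/>
        </dhcp>
      </ip>
    </network>
    `

    func main() {
        t := template.Must(template.New("net").Parse(networkTmpl))
        // Values taken from the free-subnet probe logged above.
        err := t.Execute(os.Stdout, map[string]string{
            "Name":      "mk-ha-290859",
            "Gateway":   "192.168.39.1",
            "Netmask":   "255.255.255.0",
            "ClientMin": "192.168.39.2",
            "ClientMax": "192.168.39.253",
        })
        if err != nil {
            log.Fatal(err)
        }
    }
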
	I0414 14:28:44.951588 1213155 main.go:141] libmachine: (ha-290859) DBG | trying to create private KVM network mk-ha-290859 192.168.39.0/24...
	I0414 14:28:45.019463 1213155 main.go:141] libmachine: (ha-290859) DBG | private KVM network mk-ha-290859 192.168.39.0/24 created
	I0414 14:28:45.019524 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.019424 1213178 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:28:45.019537 1213155 main.go:141] libmachine: (ha-290859) setting up store path in /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859 ...
	I0414 14:28:45.019577 1213155 main.go:141] libmachine: (ha-290859) building disk image from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 14:28:45.019612 1213155 main.go:141] libmachine: (ha-290859) Downloading /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 14:28:45.329551 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.329430 1213178 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa...
	I0414 14:28:45.651739 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.651571 1213178 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/ha-290859.rawdisk...
	I0414 14:28:45.651774 1213155 main.go:141] libmachine: (ha-290859) DBG | Writing magic tar header
	I0414 14:28:45.651813 1213155 main.go:141] libmachine: (ha-290859) DBG | Writing SSH key tar header
	I0414 14:28:45.651828 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.651709 1213178 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859 ...
	I0414 14:28:45.651838 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859
	I0414 14:28:45.651849 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines
	I0414 14:28:45.651870 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:28:45.651877 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368
	I0414 14:28:45.651888 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859 (perms=drwx------)
	I0414 14:28:45.651901 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines (perms=drwxr-xr-x)
	I0414 14:28:45.651912 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube (perms=drwxr-xr-x)
	I0414 14:28:45.651969 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 14:28:45.651997 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins
	I0414 14:28:45.652007 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368 (perms=drwxrwxr-x)
	I0414 14:28:45.652022 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 14:28:45.652031 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 14:28:45.652040 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home
	I0414 14:28:45.652050 1213155 main.go:141] libmachine: (ha-290859) DBG | skipping /home - not owner
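
The "magic tar header" written above is what seeds the fresh VM: the raw disk file begins with a tar stream holding the generated SSH key, which the guest unpacks on first boot, and the rest of the file serves as the disk itself. A minimal sketch of that layout, with illustrative paths and placeholder key material:

    package main

    import (
        "archive/tar"
        "log"
        "os"
    )

    func main() {
        disk, err := os.Create("/tmp/demo.rawdisk") // stand-in for ha-290859.rawdisk
        if err != nil {
            log.Fatal(err)
        }
        defer disk.Close()

        key := []byte("ssh-rsa AAAA... jenkins@example") // placeholder key material
        tw := tar.NewWriter(disk)
        // Write the tar header + SSH key at the very start of the disk.
        if err := tw.WriteHeader(&tar.Header{Name: ".ssh/authorized_keys", Mode: 0o600, Size: int64(len(key))}); err != nil {
            log.Fatal(err)
        }
        if _, err := tw.Write(key); err != nil {
            log.Fatal(err)
        }
        if err := tw.Close(); err != nil {
            log.Fatal(err)
        }
        // Grow the sparse file to the requested size (20000MB in the log).
        if err := disk.Truncate(20000 * 1024 * 1024); err != nil {
            log.Fatal(err)
        }
    }
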
	I0414 14:28:45.652117 1213155 main.go:141] libmachine: (ha-290859) creating domain...
	I0414 14:28:45.653155 1213155 main.go:141] libmachine: (ha-290859) define libvirt domain using xml: 
	I0414 14:28:45.653173 1213155 main.go:141] libmachine: (ha-290859) <domain type='kvm'>
	I0414 14:28:45.653182 1213155 main.go:141] libmachine: (ha-290859)   <name>ha-290859</name>
	I0414 14:28:45.653197 1213155 main.go:141] libmachine: (ha-290859)   <memory unit='MiB'>2200</memory>
	I0414 14:28:45.653206 1213155 main.go:141] libmachine: (ha-290859)   <vcpu>2</vcpu>
	I0414 14:28:45.653212 1213155 main.go:141] libmachine: (ha-290859)   <features>
	I0414 14:28:45.653231 1213155 main.go:141] libmachine: (ha-290859)     <acpi/>
	I0414 14:28:45.653240 1213155 main.go:141] libmachine: (ha-290859)     <apic/>
	I0414 14:28:45.653258 1213155 main.go:141] libmachine: (ha-290859)     <pae/>
	I0414 14:28:45.653267 1213155 main.go:141] libmachine: (ha-290859)     
	I0414 14:28:45.653272 1213155 main.go:141] libmachine: (ha-290859)   </features>
	I0414 14:28:45.653277 1213155 main.go:141] libmachine: (ha-290859)   <cpu mode='host-passthrough'>
	I0414 14:28:45.653281 1213155 main.go:141] libmachine: (ha-290859)   
	I0414 14:28:45.653287 1213155 main.go:141] libmachine: (ha-290859)   </cpu>
	I0414 14:28:45.653317 1213155 main.go:141] libmachine: (ha-290859)   <os>
	I0414 14:28:45.653340 1213155 main.go:141] libmachine: (ha-290859)     <type>hvm</type>
	I0414 14:28:45.653351 1213155 main.go:141] libmachine: (ha-290859)     <boot dev='cdrom'/>
	I0414 14:28:45.653362 1213155 main.go:141] libmachine: (ha-290859)     <boot dev='hd'/>
	I0414 14:28:45.653372 1213155 main.go:141] libmachine: (ha-290859)     <bootmenu enable='no'/>
	I0414 14:28:45.653379 1213155 main.go:141] libmachine: (ha-290859)   </os>
	I0414 14:28:45.653387 1213155 main.go:141] libmachine: (ha-290859)   <devices>
	I0414 14:28:45.653396 1213155 main.go:141] libmachine: (ha-290859)     <disk type='file' device='cdrom'>
	I0414 14:28:45.653409 1213155 main.go:141] libmachine: (ha-290859)       <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/boot2docker.iso'/>
	I0414 14:28:45.653425 1213155 main.go:141] libmachine: (ha-290859)       <target dev='hdc' bus='scsi'/>
	I0414 14:28:45.653434 1213155 main.go:141] libmachine: (ha-290859)       <readonly/>
	I0414 14:28:45.653441 1213155 main.go:141] libmachine: (ha-290859)     </disk>
	I0414 14:28:45.653450 1213155 main.go:141] libmachine: (ha-290859)     <disk type='file' device='disk'>
	I0414 14:28:45.653459 1213155 main.go:141] libmachine: (ha-290859)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 14:28:45.653472 1213155 main.go:141] libmachine: (ha-290859)       <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/ha-290859.rawdisk'/>
	I0414 14:28:45.653484 1213155 main.go:141] libmachine: (ha-290859)       <target dev='hda' bus='virtio'/>
	I0414 14:28:45.653515 1213155 main.go:141] libmachine: (ha-290859)     </disk>
	I0414 14:28:45.653535 1213155 main.go:141] libmachine: (ha-290859)     <interface type='network'>
	I0414 14:28:45.653542 1213155 main.go:141] libmachine: (ha-290859)       <source network='mk-ha-290859'/>
	I0414 14:28:45.653551 1213155 main.go:141] libmachine: (ha-290859)       <model type='virtio'/>
	I0414 14:28:45.653571 1213155 main.go:141] libmachine: (ha-290859)     </interface>
	I0414 14:28:45.653583 1213155 main.go:141] libmachine: (ha-290859)     <interface type='network'>
	I0414 14:28:45.653600 1213155 main.go:141] libmachine: (ha-290859)       <source network='default'/>
	I0414 14:28:45.653612 1213155 main.go:141] libmachine: (ha-290859)       <model type='virtio'/>
	I0414 14:28:45.653620 1213155 main.go:141] libmachine: (ha-290859)     </interface>
	I0414 14:28:45.653629 1213155 main.go:141] libmachine: (ha-290859)     <serial type='pty'>
	I0414 14:28:45.653637 1213155 main.go:141] libmachine: (ha-290859)       <target port='0'/>
	I0414 14:28:45.653643 1213155 main.go:141] libmachine: (ha-290859)     </serial>
	I0414 14:28:45.653650 1213155 main.go:141] libmachine: (ha-290859)     <console type='pty'>
	I0414 14:28:45.653666 1213155 main.go:141] libmachine: (ha-290859)       <target type='serial' port='0'/>
	I0414 14:28:45.653677 1213155 main.go:141] libmachine: (ha-290859)     </console>
	I0414 14:28:45.653688 1213155 main.go:141] libmachine: (ha-290859)     <rng model='virtio'>
	I0414 14:28:45.653706 1213155 main.go:141] libmachine: (ha-290859)       <backend model='random'>/dev/random</backend>
	I0414 14:28:45.653722 1213155 main.go:141] libmachine: (ha-290859)     </rng>
	I0414 14:28:45.653733 1213155 main.go:141] libmachine: (ha-290859)     
	I0414 14:28:45.653742 1213155 main.go:141] libmachine: (ha-290859)     
	I0414 14:28:45.653750 1213155 main.go:141] libmachine: (ha-290859)   </devices>
	I0414 14:28:45.653759 1213155 main.go:141] libmachine: (ha-290859) </domain>
	I0414 14:28:45.653770 1213155 main.go:141] libmachine: (ha-290859) 
	I0414 14:28:45.658722 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:59:bb:2c in network default
	I0414 14:28:45.659333 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:45.659353 1213155 main.go:141] libmachine: (ha-290859) starting domain...
	I0414 14:28:45.659378 1213155 main.go:141] libmachine: (ha-290859) ensuring networks are active...
	I0414 14:28:45.660118 1213155 main.go:141] libmachine: (ha-290859) Ensuring network default is active
	I0414 14:28:45.660455 1213155 main.go:141] libmachine: (ha-290859) Ensuring network mk-ha-290859 is active
	I0414 14:28:45.660871 1213155 main.go:141] libmachine: (ha-290859) getting domain XML...
	I0414 14:28:45.661572 1213155 main.go:141] libmachine: (ha-290859) creating domain...
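
With the domain XML assembled, the driver defines the domain against libvirt and then starts it. A minimal sketch of that define-then-create sequence, assuming the libvirt Go bindings (libvirt.org/go/libvirt) and a local dump of the XML above; the kvm2 driver wraps equivalent calls behind its plugin RPC:

    package main

    import (
        "log"
        "os"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        xml, err := os.ReadFile("ha-290859.xml") // hypothetical dump of the domain XML above
        if err != nil {
            log.Fatal(err)
        }
        conn, err := libvirt.NewConnect("qemu:///system") // KVMQemuURI from the config
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        dom, err := conn.DomainDefineXML(string(xml)) // "define libvirt domain using xml"
        if err != nil {
            log.Fatal(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil { // "starting domain..."
            log.Fatal(err)
        }
    }
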
	I0414 14:28:46.865636 1213155 main.go:141] libmachine: (ha-290859) waiting for IP...
	I0414 14:28:46.866384 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:46.866766 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:46.866798 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:46.866746 1213178 retry.go:31] will retry after 192.973653ms: waiting for domain to come up
	I0414 14:28:47.061336 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:47.061771 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:47.061833 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:47.061746 1213178 retry.go:31] will retry after 359.567223ms: waiting for domain to come up
	I0414 14:28:47.423487 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:47.423982 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:47.424016 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:47.423949 1213178 retry.go:31] will retry after 421.939914ms: waiting for domain to come up
	I0414 14:28:47.847747 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:47.848233 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:47.848285 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:47.848207 1213178 retry.go:31] will retry after 530.391474ms: waiting for domain to come up
	I0414 14:28:48.380081 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:48.380580 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:48.380623 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:48.380551 1213178 retry.go:31] will retry after 642.117854ms: waiting for domain to come up
	I0414 14:28:49.024104 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:49.024507 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:49.024543 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:49.024472 1213178 retry.go:31] will retry after 676.607867ms: waiting for domain to come up
	I0414 14:28:49.702625 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:49.702971 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:49.702999 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:49.702940 1213178 retry.go:31] will retry after 827.403569ms: waiting for domain to come up
	I0414 14:28:50.531673 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:50.532146 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:50.532168 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:50.532111 1213178 retry.go:31] will retry after 1.096062201s: waiting for domain to come up
	I0414 14:28:51.630700 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:51.631223 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:51.631271 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:51.631180 1213178 retry.go:31] will retry after 1.695737217s: waiting for domain to come up
	I0414 14:28:53.328391 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:53.328936 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:53.328976 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:53.328895 1213178 retry.go:31] will retry after 1.847433296s: waiting for domain to come up
	I0414 14:28:55.178635 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:55.179196 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:55.179222 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:55.179116 1213178 retry.go:31] will retry after 1.882043118s: waiting for domain to come up
	I0414 14:28:57.063275 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:57.063819 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:57.063839 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:57.063785 1213178 retry.go:31] will retry after 2.565601812s: waiting for domain to come up
	I0414 14:28:59.632546 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:59.633076 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:59.633121 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:59.633056 1213178 retry.go:31] will retry after 3.119155423s: waiting for domain to come up
	I0414 14:29:02.755950 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:02.756520 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:29:02.756617 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:29:02.756481 1213178 retry.go:31] will retry after 3.570724653s: waiting for domain to come up
	I0414 14:29:06.329744 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.330242 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has current primary IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.330260 1213155 main.go:141] libmachine: (ha-290859) found domain IP: 192.168.39.110
	I0414 14:29:06.330269 1213155 main.go:141] libmachine: (ha-290859) reserving static IP address...
	I0414 14:29:06.330641 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find host DHCP lease matching {name: "ha-290859", mac: "52:54:00:be:9f:8b", ip: "192.168.39.110"} in network mk-ha-290859
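
The wait above polls for the domain's DHCP lease with a growing delay (193ms up to ~3.6s) until the IP appears. A sketch of that poll-with-backoff pattern; lookup is a stand-in for parsing the host's DHCP leases:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls lookup with a growing, jittered delay until the
    // lease appears or the deadline passes.
    func waitForIP(lookup func() (string, bool), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, ok := lookup(); ok {
                return ip, nil
            }
            jitter := time.Duration(rand.Int63n(int64(delay) / 2))
            time.Sleep(delay + jitter)
            delay = delay * 3 / 2 // grow roughly like the intervals in the log
        }
        return "", errors.New("timed out waiting for domain to come up")
    }

    func main() {
        calls := 0
        ip, err := waitForIP(func() (string, bool) {
            calls++
            if calls < 4 {
                return "", false // lease not visible yet
            }
            return "192.168.39.110", true
        }, time.Minute)
        fmt.Println(ip, err)
    }
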
	I0414 14:29:06.406487 1213155 main.go:141] libmachine: (ha-290859) DBG | Getting to WaitForSSH function...
	I0414 14:29:06.406521 1213155 main.go:141] libmachine: (ha-290859) reserved static IP address 192.168.39.110 for domain ha-290859
	I0414 14:29:06.406533 1213155 main.go:141] libmachine: (ha-290859) waiting for SSH...
	I0414 14:29:06.409873 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.410210 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:minikube Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.410253 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.410314 1213155 main.go:141] libmachine: (ha-290859) DBG | Using SSH client type: external
	I0414 14:29:06.410387 1213155 main.go:141] libmachine: (ha-290859) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa (-rw-------)
	I0414 14:29:06.410418 1213155 main.go:141] libmachine: (ha-290859) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.110 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:29:06.410439 1213155 main.go:141] libmachine: (ha-290859) DBG | About to run SSH command:
	I0414 14:29:06.410452 1213155 main.go:141] libmachine: (ha-290859) DBG | exit 0
	I0414 14:29:06.535060 1213155 main.go:141] libmachine: (ha-290859) DBG | SSH cmd err, output: <nil>: 
	I0414 14:29:06.535328 1213155 main.go:141] libmachine: (ha-290859) KVM machine creation complete
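
WaitForSSH shells out to the system ssh with host-key checking disabled and treats a zero exit status from `exit 0` as "SSH is up". A sketch of that probe using the same flags as the command line logged above; the host and key path are illustrative:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // sshReady runs `ssh ... exit 0` and reports whether it exited zero.
    func sshReady(ip, keyPath string) bool {
        cmd := exec.Command("ssh",
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "docker@"+ip,
            "exit 0")
        return cmd.Run() == nil
    }

    func main() {
        fmt.Println(sshReady("192.168.39.110", "/path/to/id_rsa")) // illustrative host/key
    }
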
	I0414 14:29:06.535695 1213155 main.go:141] libmachine: (ha-290859) Calling .GetConfigRaw
	I0414 14:29:06.536306 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:06.536530 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:06.536742 1213155 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 14:29:06.536766 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:06.538276 1213155 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 14:29:06.538292 1213155 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 14:29:06.538297 1213155 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 14:29:06.538303 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:06.540789 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.541096 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.541142 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.541273 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:06.541468 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.541620 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.541797 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:06.541943 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:06.542216 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:06.542236 1213155 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 14:29:06.650464 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:29:06.650493 1213155 main.go:141] libmachine: Detecting the provisioner...
	I0414 14:29:06.650505 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:06.653952 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.654723 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.654757 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.654985 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:06.655204 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.655393 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.655541 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:06.655742 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:06.655964 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:06.655983 1213155 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 14:29:06.763752 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 14:29:06.763848 1213155 main.go:141] libmachine: found compatible host: buildroot
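
The provisioner is picked by reading the ID field out of the guest's /etc/os-release, shown verbatim above. A minimal Go sketch of that detection:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // detectProvisioner pulls ID= out of os-release output and returns it.
    func detectProvisioner(osRelease string) string {
        sc := bufio.NewScanner(strings.NewReader(osRelease))
        for sc.Scan() {
            if v, ok := strings.CutPrefix(sc.Text(), "ID="); ok {
                return strings.Trim(v, `"`)
            }
        }
        return "unknown"
    }

    func main() {
        out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\n"
        fmt.Println(detectProvisioner(out)) // buildroot
    }
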
	I0414 14:29:06.763862 1213155 main.go:141] libmachine: Provisioning with buildroot...
	I0414 14:29:06.763874 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:29:06.764294 1213155 buildroot.go:166] provisioning hostname "ha-290859"
	I0414 14:29:06.764326 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:29:06.764523 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:06.767077 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.767516 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.767542 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.767639 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:06.767813 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.767978 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.768165 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:06.768341 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:06.768572 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:06.768583 1213155 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-290859 && echo "ha-290859" | sudo tee /etc/hostname
	I0414 14:29:06.889296 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-290859
	
	I0414 14:29:06.889330 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:06.892172 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.892600 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.892626 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.892865 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:06.893083 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.893277 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.893435 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:06.893648 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:06.893858 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:06.893874 1213155 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-290859' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-290859/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-290859' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:29:07.007141 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:29:07.007184 1213155 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1196368/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1196368/.minikube}
	I0414 14:29:07.007203 1213155 buildroot.go:174] setting up certificates
	I0414 14:29:07.007215 1213155 provision.go:84] configureAuth start
	I0414 14:29:07.007224 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:29:07.007528 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:29:07.010400 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.010788 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.010824 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.010979 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.012963 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.013271 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.013387 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.013515 1213155 provision.go:143] copyHostCerts
	I0414 14:29:07.013548 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:29:07.013586 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem, removing ...
	I0414 14:29:07.013609 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:29:07.013691 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem (1082 bytes)
	I0414 14:29:07.013790 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:29:07.013815 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem, removing ...
	I0414 14:29:07.013825 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:29:07.013863 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem (1123 bytes)
	I0414 14:29:07.013930 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:29:07.013953 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem, removing ...
	I0414 14:29:07.013962 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:29:07.013998 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem (1675 bytes)
	I0414 14:29:07.014066 1213155 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem org=jenkins.ha-290859 san=[127.0.0.1 192.168.39.110 ha-290859 localhost minikube]
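
The server cert generated here is signed by the profile's CA and carries the SANs listed in the log line above (127.0.0.1, 192.168.39.110, ha-290859, localhost, minikube). A compact crypto/x509 sketch of the same shape, generating a throwaway CA in place of the on-disk ca-key.pem:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func must[T any](v T, err error) T {
        if err != nil {
            log.Fatal(err)
        }
        return v
    }

    func main() {
        // Throwaway CA standing in for certs/ca.pem + ca-key.pem.
        caKey := must(rsa.GenerateKey(rand.Reader, 2048))
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
        caCert := must(x509.ParseCertificate(caDER))

        // Server cert with the SANs from the log line above.
        srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-290859"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            DNSNames:     []string{"ha-290859", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.110")},
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        srvDER := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
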
	I0414 14:29:07.096347 1213155 provision.go:177] copyRemoteCerts
	I0414 14:29:07.096413 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:29:07.096445 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.099387 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.099720 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.099754 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.099919 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.100133 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.100320 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.100477 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:07.185597 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0414 14:29:07.185665 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 14:29:07.208427 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0414 14:29:07.208514 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0414 14:29:07.230077 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0414 14:29:07.230146 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 14:29:07.252057 1213155 provision.go:87] duration metric: took 244.822415ms to configureAuth
	I0414 14:29:07.252098 1213155 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:29:07.252381 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:07.252417 1213155 main.go:141] libmachine: Checking connection to Docker...
	I0414 14:29:07.252428 1213155 main.go:141] libmachine: (ha-290859) Calling .GetURL
	I0414 14:29:07.253526 1213155 main.go:141] libmachine: (ha-290859) DBG | using libvirt version 6000000
	I0414 14:29:07.255629 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.255987 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.256013 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.256164 1213155 main.go:141] libmachine: Docker is up and running!
	I0414 14:29:07.256179 1213155 main.go:141] libmachine: Reticulating splines...
	I0414 14:29:07.256186 1213155 client.go:171] duration metric: took 22.312490028s to LocalClient.Create
	I0414 14:29:07.256207 1213155 start.go:167] duration metric: took 22.312544194s to libmachine.API.Create "ha-290859"
	I0414 14:29:07.256216 1213155 start.go:293] postStartSetup for "ha-290859" (driver="kvm2")
	I0414 14:29:07.256225 1213155 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:29:07.256242 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.256494 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:29:07.256518 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.258683 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.259095 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.259129 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.259274 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.259443 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.259598 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.259770 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:07.341222 1213155 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:29:07.344960 1213155 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:29:07.344983 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/addons for local assets ...
	I0414 14:29:07.345036 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/files for local assets ...
	I0414 14:29:07.345105 1213155 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> 12036392.pem in /etc/ssl/certs
	I0414 14:29:07.345117 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /etc/ssl/certs/12036392.pem
	I0414 14:29:07.345204 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:29:07.353618 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:29:07.375295 1213155 start.go:296] duration metric: took 119.0622ms for postStartSetup
	I0414 14:29:07.375348 1213155 main.go:141] libmachine: (ha-290859) Calling .GetConfigRaw
	I0414 14:29:07.376009 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:29:07.378738 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.379089 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.379127 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.379360 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:29:07.379552 1213155 start.go:128] duration metric: took 22.454193164s to createHost
	I0414 14:29:07.379576 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.381911 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.382271 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.382299 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.382412 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.382636 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.382763 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.382918 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.383103 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:07.383383 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:07.383397 1213155 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:29:07.491798 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744640947.466359070
	
	I0414 14:29:07.491832 1213155 fix.go:216] guest clock: 1744640947.466359070
	I0414 14:29:07.491843 1213155 fix.go:229] Guest: 2025-04-14 14:29:07.46635907 +0000 UTC Remote: 2025-04-14 14:29:07.37956282 +0000 UTC m=+22.563725092 (delta=86.79625ms)
	I0414 14:29:07.491874 1213155 fix.go:200] guest clock delta is within tolerance: 86.79625ms
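
The guest clock check parses the VM's `date +%s.%N` output and diffs it against the host clock; the 86.79625ms delta above falls inside the tolerance, so no clock adjustment is needed. A sketch of the comparison using the timestamps from the log (the 2s tolerance here is an assumption for illustration):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // withinTolerance parses the guest's `date +%s.%N` output, diffs it
    // against the host clock, and accepts when the delta is small enough.
    func withinTolerance(guestOut string, host time.Time, tol time.Duration) (time.Duration, bool, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
        if err != nil {
            return 0, false, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := host.Sub(guest)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tol, nil
    }

    func main() {
        host := time.Unix(0, 1744640947379562820) // host-side timestamp from the log
        delta, ok, err := withinTolerance("1744640947.466359070", host, 2*time.Second)
        fmt.Println(delta, ok, err) // ≈86ms true <nil>
    }
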
	I0414 14:29:07.491882 1213155 start.go:83] releasing machines lock for "ha-290859", held for 22.566621352s
	I0414 14:29:07.491951 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.492257 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:29:07.494784 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.495186 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.495213 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.495369 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.495891 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.496108 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.496210 1213155 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:29:07.496270 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.496330 1213155 ssh_runner.go:195] Run: cat /version.json
	I0414 14:29:07.496359 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.499187 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.499556 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.499585 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.499605 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.499687 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.499909 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.500059 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.500076 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.500080 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.500225 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:07.500343 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.500495 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.500676 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.500868 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:07.610155 1213155 ssh_runner.go:195] Run: systemctl --version
	I0414 14:29:07.615832 1213155 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 14:29:07.620841 1213155 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:29:07.620918 1213155 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:29:07.635201 1213155 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
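Note that the bridge/podman CNI configs are taken out of play by renaming them with a `.mk_disabled` suffix rather than deleting them, so they stay recoverable. A rough Go equivalent of the log's `find ... -exec mv` (same paths as above; would need root):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Approximates: find /etc/cni/net.d -maxdepth 1 -type f
	//   \( \( -name *bridge* -or -name *podman* \) -and -not -name *.mk_disabled \)
	matches, err := filepath.Glob("/etc/cni/net.d/*")
	if err != nil {
		panic(err)
	}
	for _, m := range matches {
		base := filepath.Base(m)
		if strings.HasSuffix(base, ".mk_disabled") {
			continue
		}
		if !strings.Contains(base, "bridge") && !strings.Contains(base, "podman") {
			continue
		}
		// Rename, don't delete: the config can be re-enabled later.
		if err := os.Rename(m, m+".mk_disabled"); err != nil {
			panic(err)
		}
		fmt.Printf("disabled %s\n", base)
	}
}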
	I0414 14:29:07.635238 1213155 start.go:495] detecting cgroup driver to use...
	I0414 14:29:07.635339 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0414 14:29:07.664507 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0414 14:29:07.677886 1213155 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:29:07.677968 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:29:07.691126 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:29:07.704327 1213155 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:29:07.821296 1213155 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:29:07.981478 1213155 docker.go:233] disabling docker service ...
	I0414 14:29:07.981570 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:29:07.995082 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:29:08.007593 1213155 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:29:08.118166 1213155 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:29:08.233009 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
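Getting docker and cri-docker out of the way is a fixed stop/disable/mask sequence, tolerant of failure because a given ISO may not ship every unit. A compressed sketch of the same sequence (illustrative only; the real runner executes these over SSH):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirror the log's order: cri-docker first, then docker itself.
	// Errors are reported, not fatal, since a unit may simply be absent.
	steps := [][]string{
		{"systemctl", "stop", "-f", "cri-docker.socket"},
		{"systemctl", "stop", "-f", "cri-docker.service"},
		{"systemctl", "disable", "cri-docker.socket"},
		{"systemctl", "mask", "cri-docker.service"},
		{"systemctl", "stop", "-f", "docker.socket"},
		{"systemctl", "stop", "-f", "docker.service"},
		{"systemctl", "disable", "docker.socket"},
		{"systemctl", "mask", "docker.service"},
	}
	for _, s := range steps {
		if out, err := exec.Command("sudo", s...).CombinedOutput(); err != nil {
			fmt.Printf("%v: %v (%s)\n", s, err, out)
		}
	}
}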
	I0414 14:29:08.245943 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:29:08.262966 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0414 14:29:08.272218 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0414 14:29:08.281344 1213155 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0414 14:29:08.281397 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0414 14:29:08.290468 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:29:08.299561 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0414 14:29:08.308656 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:29:08.317719 1213155 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:29:08.327133 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0414 14:29:08.336264 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0414 14:29:08.345279 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
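All of the containerd reconfiguration above is done with in-place sed rewrites of /etc/containerd/config.toml. The cgroup-driver flip, for instance, is one regexp substitution; a Go sketch of that single edit, pointed at a scratch copy of the file rather than the live config:

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	path := "config.toml" // stand-in for /etc/containerd/config.toml
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
	fmt.Println("set SystemdCgroup = false (cgroupfs driver)")
}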
	I0414 14:29:08.354386 1213155 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:29:08.362578 1213155 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:29:08.362625 1213155 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:29:08.374609 1213155 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
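The netfilter check above is deliberately two-stage: read the sysctl, and only when /proc/sys/net/bridge is missing (the status-255 failure logged here) load br_netfilter and move on. A sketch of the same fallback plus the ip_forward write (needs root):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(key); err != nil {
		// Same failure mode as the log: the bridge sysctl tree does not
		// exist until br_netfilter is loaded.
		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
			panic(fmt.Sprintf("modprobe br_netfilter: %v (%s)", err, out))
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("bridge netfilter and IPv4 forwarding enabled")
}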
	I0414 14:29:08.383117 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:29:08.490311 1213155 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0414 14:29:08.517222 1213155 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0414 14:29:08.517297 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:29:08.522141 1213155 retry.go:31] will retry after 1.326617724s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0414 14:29:09.849693 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
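After `systemctl restart containerd` the socket is not there instantly, hence the single retry before the stat succeeds at 14:29:09. A generic poll-until-exists helper in the same spirit; the fixed one-second step is an assumption standing in for minikube's jittered backoff (retry.go):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls for a filesystem path until it appears or the
// deadline passes.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(time.Second)
	}
}

func main() {
	if err := waitForPath("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("containerd socket is up")
}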
	I0414 14:29:09.855377 1213155 start.go:563] Will wait 60s for crictl version
	I0414 14:29:09.855452 1213155 ssh_runner.go:195] Run: which crictl
	I0414 14:29:09.859356 1213155 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:29:09.901676 1213155 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0414 14:29:09.901749 1213155 ssh_runner.go:195] Run: containerd --version
	I0414 14:29:09.933729 1213155 ssh_runner.go:195] Run: containerd --version
	I0414 14:29:09.957147 1213155 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.23 ...
	I0414 14:29:09.958358 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:29:09.961074 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:09.961436 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:09.961465 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:09.961654 1213155 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 14:29:09.965618 1213155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
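The host.minikube.internal entry is written with a grep-filter-then-append pipeline so repeated starts never stack duplicate lines. The same idempotent rewrite in Go, pointed at a scratch "hosts" file rather than the real /etc/hosts:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostEntry drops any existing line for the host name and appends a
// fresh "ip\thost" mapping, mirroring the log's grep -v / echo pipeline.
func ensureHostEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostEntry("hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
	fmt.Println("hosts entry ensured")
}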
	I0414 14:29:09.977763 1213155 kubeadm.go:883] updating cluster {Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 14:29:09.977920 1213155 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:29:09.977985 1213155 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:29:10.007423 1213155 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0414 14:29:10.007567 1213155 ssh_runner.go:195] Run: which lz4
	I0414 14:29:10.011302 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0414 14:29:10.011399 1213155 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 14:29:10.015201 1213155 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 14:29:10.015237 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (398567491 bytes)
	I0414 14:29:11.177802 1213155 containerd.go:563] duration metric: took 1.166430977s to copy over tarball
	I0414 14:29:11.177883 1213155 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 14:29:13.222422 1213155 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.044497794s)
	I0414 14:29:13.222461 1213155 containerd.go:570] duration metric: took 2.04462504s to extract the tarball
	I0414 14:29:13.222471 1213155 ssh_runner.go:146] rm: /preloaded.tar.lz4
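The preload path, end to end, is: stat the tarball on the guest, scp it over only if missing, extract it into /var with lz4-aware tar, then remove it. Compressed into one sketch (shelling out for the extraction, exactly as the log does; the scp step is elided):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	// Existence check; on a fresh VM this fails exactly as in the log.
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("preload missing on guest; would scp it from the local cache")
		return
	}
	// Equivalent of: sudo tar --xattrs --xattrs-include security.capability \
	//   -I lz4 -C /var -xf /preloaded.tar.lz4
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("extract: %v (%s)", err, out))
	}
	// The tarball is removed after extraction to reclaim disk space.
	_ = os.Remove(tarball)
}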
	I0414 14:29:13.258541 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:29:13.368119 1213155 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0414 14:29:13.394813 1213155 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:29:13.428402 1213155 retry.go:31] will retry after 248.442754ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-04-14T14:29:13Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0414 14:29:13.677983 1213155 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:29:13.709958 1213155 containerd.go:627] all images are preloaded for containerd runtime.
	I0414 14:29:13.709986 1213155 cache_images.go:84] Images are preloaded, skipping loading
	I0414 14:29:13.709997 1213155 kubeadm.go:934] updating node { 192.168.39.110 8443 v1.32.2 containerd true true} ...
	I0414 14:29:13.710119 1213155 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-290859 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.110
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
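The kubelet drop-in above is rendered from the node record: the binary path comes from KubernetesVersion, and hostname-override/node-ip from the machine's name and address. A hypothetical text/template rendering of that same drop-in (field names here are invented for the sketch):

package main

import (
	"os"
	"text/template"
)

// Fields taken from the node record shown in the log.
type kubeletOpts struct {
	Version  string
	Hostname string
	NodeIP   string
}

const unit = `[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	// The rendered text is what gets scp'd to
	// /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes below).
	if err := t.Execute(os.Stdout, kubeletOpts{
		Version:  "v1.32.2",
		Hostname: "ha-290859",
		NodeIP:   "192.168.39.110",
	}); err != nil {
		panic(err)
	}
}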
	I0414 14:29:13.710205 1213155 ssh_runner.go:195] Run: sudo crictl info
	I0414 14:29:13.747854 1213155 cni.go:84] Creating CNI manager for ""
	I0414 14:29:13.747881 1213155 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0414 14:29:13.747891 1213155 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 14:29:13.747912 1213155 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.110 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-290859 NodeName:ha-290859 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.110"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.110 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 14:29:13.748064 1213155 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.110
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-290859"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.110"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.110"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 14:29:13.748098 1213155 kube-vip.go:115] generating kube-vip config ...
	I0414 14:29:13.748144 1213155 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0414 14:29:13.764006 1213155 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
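Control-plane load balancing in kube-vip is only switched on after probing that the guest kernel can actually load the IPVS modules; that is what the `modprobe --all` at 14:29:13 establishes. A sketch of the probe-then-enable decision:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same probe as the log: if all IPVS modules load, it is safe to turn
	// on kube-vip's lb_enable/lb_port settings in the generated manifest.
	err := exec.Command("sudo", "modprobe", "--all",
		"ip_vs", "ip_vs_rr", "ip_vs_wrr", "ip_vs_sh", "nf_conntrack").Run()
	if err != nil {
		fmt.Println("IPVS modules unavailable; generating kube-vip config without load balancing")
		return
	}
	fmt.Println("auto-enabling control-plane load-balancing in kube-vip")
}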
	I0414 14:29:13.764157 1213155 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.10
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0414 14:29:13.764258 1213155 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 14:29:13.773742 1213155 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 14:29:13.773825 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0414 14:29:13.782879 1213155 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0414 14:29:13.798384 1213155 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 14:29:13.813614 1213155 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0414 14:29:13.828571 1213155 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1448 bytes)
	I0414 14:29:13.844489 1213155 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0414 14:29:13.848595 1213155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:29:13.861109 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:29:13.970530 1213155 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:29:13.987774 1213155 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859 for IP: 192.168.39.110
	I0414 14:29:13.987806 1213155 certs.go:194] generating shared ca certs ...
	I0414 14:29:13.987826 1213155 certs.go:226] acquiring lock for ca certs: {Name:mk7215406b4c41badf9eca6bf9f1036fd88f670e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:13.988007 1213155 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key
	I0414 14:29:13.988081 1213155 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key
	I0414 14:29:13.988097 1213155 certs.go:256] generating profile certs ...
	I0414 14:29:13.988180 1213155 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key
	I0414 14:29:13.988200 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt with IP's: []
	I0414 14:29:14.112386 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt ...
	I0414 14:29:14.112419 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt: {Name:mkaa12fb6551a5751b7fccd564d65a45c41d9fae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.112582 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key ...
	I0414 14:29:14.112593 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key: {Name:mk289f4dd0a4fd9031dc4ffc7198a0cf95bd5550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.112674 1213155 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037
	I0414 14:29:14.112690 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110 192.168.39.254]
	I0414 14:29:14.362652 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037 ...
	I0414 14:29:14.362686 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037: {Name:mkb37a2918627d85c90b385a1878c8973ae4ce15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.362861 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037 ...
	I0414 14:29:14.362875 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037: {Name:mk9be12aff468559ae8511cb5c354c2cb0f19d89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.362947 1213155 certs.go:381] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt
	I0414 14:29:14.363058 1213155 certs.go:385] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key
	I0414 14:29:14.363124 1213155 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key
	I0414 14:29:14.363139 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt with IP's: []
	I0414 14:29:14.734988 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt ...
	I0414 14:29:14.735020 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt: {Name:mkd4197f76084714cf4c93b86f69c9de5e486dfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.735175 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key ...
	I0414 14:29:14.735185 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key: {Name:mkafd73813de8b0bb698e460f51557bc241d5b76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
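The apiserver profile cert above is signed for every address a client might dial: the in-cluster service IP 10.96.0.1, loopback, the node IP, and the HA VIP 192.168.39.254. A self-signed stand-in showing how those IP SANs end up in the certificate; the real code signs with the minikubeCA key rather than self-signing:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The IP SANs from the log line above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.110"),
			net.ParseIP("192.168.39.254"),
		},
	}
	// Self-signed for the sketch; minikube signs with its CA cert/key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}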
	I0414 14:29:14.735249 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0414 14:29:14.735287 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0414 14:29:14.735300 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0414 14:29:14.735312 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0414 14:29:14.735324 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0414 14:29:14.735336 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0414 14:29:14.735348 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0414 14:29:14.735362 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0414 14:29:14.735413 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem (1338 bytes)
	W0414 14:29:14.735450 1213155 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639_empty.pem, impossibly tiny 0 bytes
	I0414 14:29:14.735459 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 14:29:14.735483 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem (1082 bytes)
	I0414 14:29:14.735504 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:29:14.735524 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem (1675 bytes)
	I0414 14:29:14.735559 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:29:14.735585 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:14.735598 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem -> /usr/share/ca-certificates/1203639.pem
	I0414 14:29:14.735609 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /usr/share/ca-certificates/12036392.pem
	I0414 14:29:14.736193 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:29:14.767094 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:29:14.800218 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:29:14.821856 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:29:14.844537 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0414 14:29:14.866333 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 14:29:14.888112 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:29:14.916382 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 14:29:14.938747 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:29:14.961044 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem --> /usr/share/ca-certificates/1203639.pem (1338 bytes)
	I0414 14:29:14.982817 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /usr/share/ca-certificates/12036392.pem (1708 bytes)
	I0414 14:29:15.004432 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 14:29:15.020381 1213155 ssh_runner.go:195] Run: openssl version
	I0414 14:29:15.026049 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:29:15.036472 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:15.040722 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:15.040772 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:15.046327 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 14:29:15.056866 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1203639.pem && ln -fs /usr/share/ca-certificates/1203639.pem /etc/ssl/certs/1203639.pem"
	I0414 14:29:15.067689 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1203639.pem
	I0414 14:29:15.071944 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1203639.pem
	I0414 14:29:15.071988 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1203639.pem
	I0414 14:29:15.077553 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1203639.pem /etc/ssl/certs/51391683.0"
	I0414 14:29:15.088088 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12036392.pem && ln -fs /usr/share/ca-certificates/12036392.pem /etc/ssl/certs/12036392.pem"
	I0414 14:29:15.098760 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12036392.pem
	I0414 14:29:15.103102 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/12036392.pem
	I0414 14:29:15.103157 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12036392.pem
	I0414 14:29:15.108670 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12036392.pem /etc/ssl/certs/3ec20f2e.0"
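The three symlinks above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names, which is how the system trust store looks up a CA by hash. A sketch that derives the hash the same way the log does, by shelling out to openssl, then creates the link (would need root for /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// hashLink creates /etc/ssl/certs/<subject-hash>.0 -> certPath, the layout
// OpenSSL uses to locate CA certificates.
func hashLink(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	// ln -fs semantics: replace any stale link first.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		panic(err)
	}
	fmt.Println("CA hash link created")
}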
	I0414 14:29:15.119187 1213155 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:29:15.123052 1213155 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 14:29:15.123124 1213155 kubeadm.go:392] StartCluster: {Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:29:15.123226 1213155 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0414 14:29:15.123302 1213155 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 14:29:15.161985 1213155 cri.go:89] found id: ""
	I0414 14:29:15.162066 1213155 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 14:29:15.171810 1213155 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 14:29:15.180816 1213155 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 14:29:15.189781 1213155 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 14:29:15.189798 1213155 kubeadm.go:157] found existing configuration files:
	
	I0414 14:29:15.189837 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 14:29:15.198461 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 14:29:15.198520 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 14:29:15.207495 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 14:29:15.216131 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 14:29:15.216195 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 14:29:15.224923 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 14:29:15.233259 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 14:29:15.233331 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 14:29:15.241811 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 14:29:15.250678 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 14:29:15.250735 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
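The four grep/rm pairs above are one loop in practice: for each kubeconfig, keep it only if it already points at https://control-plane.minikube.internal:8443, otherwise remove it so kubeadm regenerates it on init. Expressed as that loop:

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, c := range confs {
		data, err := os.ReadFile(c)
		if err == nil && bytes.Contains(data, []byte(endpoint)) {
			continue // config already targets the right endpoint; keep it
		}
		// Missing or stale: remove so kubeadm init writes a fresh one.
		if err := os.Remove(c); err != nil && !os.IsNotExist(err) {
			panic(err)
		}
		fmt.Printf("cleared %s\n", c)
	}
}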
	I0414 14:29:15.260028 1213155 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 14:29:15.480841 1213155 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 14:29:26.375395 1213155 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0414 14:29:26.375454 1213155 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 14:29:26.375539 1213155 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 14:29:26.375638 1213155 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 14:29:26.375756 1213155 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 14:29:26.375859 1213155 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 14:29:26.377483 1213155 out.go:235]   - Generating certificates and keys ...
	I0414 14:29:26.377576 1213155 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 14:29:26.377649 1213155 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 14:29:26.377746 1213155 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 14:29:26.377814 1213155 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 14:29:26.377894 1213155 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 14:29:26.377993 1213155 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 14:29:26.378062 1213155 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 14:29:26.378201 1213155 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-290859 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
	I0414 14:29:26.378273 1213155 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 14:29:26.378435 1213155 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-290859 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
	I0414 14:29:26.378525 1213155 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 14:29:26.378617 1213155 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 14:29:26.378679 1213155 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 14:29:26.378756 1213155 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 14:29:26.378826 1213155 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 14:29:26.378905 1213155 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0414 14:29:26.378987 1213155 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 14:29:26.379078 1213155 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 14:29:26.379147 1213155 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 14:29:26.379232 1213155 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 14:29:26.379336 1213155 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 14:29:26.381520 1213155 out.go:235]   - Booting up control plane ...
	I0414 14:29:26.381636 1213155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 14:29:26.381716 1213155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 14:29:26.381797 1213155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 14:29:26.381942 1213155 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 14:29:26.382066 1213155 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 14:29:26.382127 1213155 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 14:29:26.382279 1213155 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0414 14:29:26.382430 1213155 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0414 14:29:26.382522 1213155 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.073677ms
	I0414 14:29:26.382613 1213155 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0414 14:29:26.382699 1213155 kubeadm.go:310] [api-check] The API server is healthy after 6.046564753s
	I0414 14:29:26.382824 1213155 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0414 14:29:26.382965 1213155 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0414 14:29:26.383055 1213155 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0414 14:29:26.383232 1213155 kubeadm.go:310] [mark-control-plane] Marking the node ha-290859 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0414 14:29:26.383336 1213155 kubeadm.go:310] [bootstrap-token] Using token: vqb1fe.jxjhh2el8g0wstxf
	I0414 14:29:26.384515 1213155 out.go:235]   - Configuring RBAC rules ...
	I0414 14:29:26.384631 1213155 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0414 14:29:26.384713 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0414 14:29:26.384863 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0414 14:29:26.384975 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0414 14:29:26.385071 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0414 14:29:26.385151 1213155 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0414 14:29:26.385262 1213155 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0414 14:29:26.385326 1213155 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0414 14:29:26.385400 1213155 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0414 14:29:26.385416 1213155 kubeadm.go:310] 
	I0414 14:29:26.385469 1213155 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0414 14:29:26.385475 1213155 kubeadm.go:310] 
	I0414 14:29:26.385551 1213155 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0414 14:29:26.385557 1213155 kubeadm.go:310] 
	I0414 14:29:26.385578 1213155 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0414 14:29:26.385628 1213155 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0414 14:29:26.385686 1213155 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0414 14:29:26.385693 1213155 kubeadm.go:310] 
	I0414 14:29:26.385743 1213155 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0414 14:29:26.385752 1213155 kubeadm.go:310] 
	I0414 14:29:26.385800 1213155 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0414 14:29:26.385806 1213155 kubeadm.go:310] 
	I0414 14:29:26.385852 1213155 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0414 14:29:26.385921 1213155 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0414 14:29:26.385993 1213155 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0414 14:29:26.385999 1213155 kubeadm.go:310] 
	I0414 14:29:26.386068 1213155 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0414 14:29:26.386137 1213155 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0414 14:29:26.386143 1213155 kubeadm.go:310] 
	I0414 14:29:26.386219 1213155 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token vqb1fe.jxjhh2el8g0wstxf \
	I0414 14:29:26.386324 1213155 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c1bc537cee1b1ab5982921331b936a1839b1da6b0963279993bdeae11071854b \
	I0414 14:29:26.386357 1213155 kubeadm.go:310] 	--control-plane 
	I0414 14:29:26.386367 1213155 kubeadm.go:310] 
	I0414 14:29:26.386468 1213155 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0414 14:29:26.386481 1213155 kubeadm.go:310] 
	I0414 14:29:26.386583 1213155 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vqb1fe.jxjhh2el8g0wstxf \
	I0414 14:29:26.386727 1213155 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c1bc537cee1b1ab5982921331b936a1839b1da6b0963279993bdeae11071854b 
	I0414 14:29:26.386755 1213155 cni.go:84] Creating CNI manager for ""
	I0414 14:29:26.386764 1213155 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0414 14:29:26.388208 1213155 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0414 14:29:26.389242 1213155 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0414 14:29:26.394753 1213155 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0414 14:29:26.394774 1213155 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0414 14:29:26.412210 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0414 14:29:26.820060 1213155 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 14:29:26.820136 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:26.820188 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-290859 minikube.k8s.io/updated_at=2025_04_14T14_29_26_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=ed8f1f01b35eff2786f40199152a1775806f2de2 minikube.k8s.io/name=ha-290859 minikube.k8s.io/primary=true
	I0414 14:29:27.135153 1213155 ops.go:34] apiserver oom_adj: -16
	I0414 14:29:27.135367 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:27.635449 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:28.135449 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:28.636235 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:29.136309 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:29.636026 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:29.742992 1213155 kubeadm.go:1113] duration metric: took 2.922923817s to wait for elevateKubeSystemPrivileges
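The repeated `kubectl get sa default` calls above are a readiness poll: the minikube-rbac clusterrolebinding can only take effect once the controller-manager has created the kube-system default service account, which here took about 2.9s. A sketch of that wait-then-bind flow; the two-minute deadline is an assumed value for the sketch:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.32.2/kubectl"
	kubeconfig := "--kubeconfig=/var/lib/minikube/kubeconfig"
	deadline := time.Now().Add(2 * time.Minute)
	// Poll until the controller-manager has created the default SA.
	for exec.Command("sudo", kubectl, "get", "sa", "default", kubeconfig).Run() != nil {
		if time.Now().After(deadline) {
			panic("default service account never appeared")
		}
		time.Sleep(500 * time.Millisecond)
	}
	// Now the clusterrolebinding from the log can succeed.
	out, err := exec.Command("sudo", kubectl, kubeconfig,
		"create", "clusterrolebinding", "minikube-rbac",
		"--clusterrole=cluster-admin",
		"--serviceaccount=kube-system:default").CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%v (%s)", err, out))
	}
	fmt.Println("kube-system privileges elevated")
}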
	I0414 14:29:29.743045 1213155 kubeadm.go:394] duration metric: took 14.619926947s to StartCluster
	I0414 14:29:29.743074 1213155 settings.go:142] acquiring lock: {Name:mk41907a6d0da0bb56b7cd58b5d8065ec36ecc97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:29.743194 1213155 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:29:29.744197 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/kubeconfig: {Name:mkeb969af3beabfdafe344f27031959a97621135 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:29.744491 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0414 14:29:29.744502 1213155 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0414 14:29:29.744531 1213155 start.go:241] waiting for startup goroutines ...
	I0414 14:29:29.744555 1213155 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 14:29:29.744638 1213155 addons.go:69] Setting storage-provisioner=true in profile "ha-290859"
	I0414 14:29:29.744667 1213155 addons.go:238] Setting addon storage-provisioner=true in "ha-290859"
	I0414 14:29:29.744674 1213155 addons.go:69] Setting default-storageclass=true in profile "ha-290859"
	I0414 14:29:29.744699 1213155 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:29:29.744707 1213155 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-290859"
	I0414 14:29:29.744811 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:29.745181 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.745244 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.745183 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.745351 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.761398 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40887
	I0414 14:29:29.761447 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39907
	I0414 14:29:29.761914 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.762048 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.762457 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.762483 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.762590 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.762615 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.762878 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.762995 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.763052 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:29.763589 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.763641 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.765711 1213155 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:29:29.765898 1213155 kapi.go:59] client config for ha-290859: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt", KeyFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key", CAFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0414 14:29:29.766513 1213155 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0414 14:29:29.766536 1213155 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0414 14:29:29.766543 1213155 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0414 14:29:29.766547 1213155 cert_rotation.go:140] Starting client certificate rotation controller
	I0414 14:29:29.766549 1213155 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0414 14:29:29.766958 1213155 addons.go:238] Setting addon default-storageclass=true in "ha-290859"
	I0414 14:29:29.767009 1213155 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:29:29.767411 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.767464 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.779638 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46315
	I0414 14:29:29.780179 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.780847 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.780887 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.781279 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.781512 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:29.783372 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:29.783403 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36833
	I0414 14:29:29.783908 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.784349 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.784370 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.784677 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.785084 1213155 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 14:29:29.785313 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.785366 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.786178 1213155 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 14:29:29.786200 1213155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 14:29:29.786221 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:29.789923 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:29.790430 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:29.790464 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:29.790637 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:29.790795 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:29.790922 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:29.791099 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
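	(The sshutil.go line above is the client that runs every subsequent ssh_runner command on the primary node. A minimal sketch of an equivalent client, assuming golang.org/x/crypto/ssh and a hypothetical key path; minikube's own sshutil package differs in detail:)

	package main

	// Sketch only: open an SSH client authenticated with the machine's
	// generated id_rsa key, mirroring the "new ssh client" log line above.
	import (
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Hypothetical key path for illustration.
		key, err := os.ReadFile("/home/jenkins/.minikube/machines/ha-290859/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			// Matches StrictHostKeyChecking=no seen later in this log.
			HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		}
		client, err := ssh.Dial("tcp", "192.168.39.110:22", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
		log.Println("connected")
	}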
	I0414 14:29:29.802732 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37933
	I0414 14:29:29.803356 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.803862 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.803890 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.804276 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.804490 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:29.806170 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:29.806431 1213155 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 14:29:29.806453 1213155 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 14:29:29.806472 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:29.808998 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:29.809401 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:29.809433 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:29.809569 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:29.809729 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:29.809892 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:29.810022 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:29.896163 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0414 14:29:29.925192 1213155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 14:29:29.976032 1213155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 14:29:30.538988 1213155 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0414 14:29:30.715801 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.715837 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.715837 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.715853 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.716172 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.716195 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.716206 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.716213 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.716280 1213155 main.go:141] libmachine: (ha-290859) DBG | Closing plugin on server side
	I0414 14:29:30.716311 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.716327 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.716336 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.716346 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.716567 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.716583 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.716597 1213155 main.go:141] libmachine: (ha-290859) DBG | Closing plugin on server side
	I0414 14:29:30.716566 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.716613 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.716759 1213155 round_trippers.go:470] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0414 14:29:30.716773 1213155 round_trippers.go:476] Request Headers:
	I0414 14:29:30.716785 1213155 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:29:30.716791 1213155 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:29:30.730413 1213155 round_trippers.go:581] Response Status: 200 OK in 13 milliseconds
	I0414 14:29:30.730637 1213155 round_trippers.go:470] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0414 14:29:30.730648 1213155 round_trippers.go:476] Request Headers:
	I0414 14:29:30.730655 1213155 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:29:30.730659 1213155 round_trippers.go:480]     Content-Type: application/vnd.kubernetes.protobuf
	I0414 14:29:30.730662 1213155 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:29:30.734349 1213155 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0414 14:29:30.734498 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.734513 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.734892 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.734913 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.734944 1213155 main.go:141] libmachine: (ha-290859) DBG | Closing plugin on server side
	I0414 14:29:30.736606 1213155 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0414 14:29:30.738276 1213155 addons.go:514] duration metric: took 993.723048ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0414 14:29:30.738323 1213155 start.go:246] waiting for cluster config update ...
	I0414 14:29:30.738339 1213155 start.go:255] writing updated cluster config ...
	I0414 14:29:30.739993 1213155 out.go:201] 
	I0414 14:29:30.741235 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:30.741303 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:29:30.742718 1213155 out.go:177] * Starting "ha-290859-m02" control-plane node in "ha-290859" cluster
	I0414 14:29:30.743745 1213155 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:29:30.743770 1213155 cache.go:56] Caching tarball of preloaded images
	I0414 14:29:30.743876 1213155 preload.go:172] Found /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0414 14:29:30.743890 1213155 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0414 14:29:30.743970 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:29:30.744172 1213155 start.go:360] acquireMachinesLock for ha-290859-m02: {Name:mk496006d22a0565bb9e0d565e1b3cb0cf0971cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 14:29:30.744229 1213155 start.go:364] duration metric: took 28.185µs to acquireMachinesLock for "ha-290859-m02"
	I0414 14:29:30.744249 1213155 start.go:93] Provisioning new machine with config: &{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0414 14:29:30.744334 1213155 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0414 14:29:30.745838 1213155 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0414 14:29:30.745923 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:30.745962 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:30.761449 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46555
	I0414 14:29:30.761938 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:30.762474 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:30.762500 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:30.762925 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:30.763197 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:29:30.763401 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:30.763637 1213155 start.go:159] libmachine.API.Create for "ha-290859" (driver="kvm2")
	I0414 14:29:30.763675 1213155 client.go:168] LocalClient.Create starting
	I0414 14:29:30.763717 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem
	I0414 14:29:30.763761 1213155 main.go:141] libmachine: Decoding PEM data...
	I0414 14:29:30.763783 1213155 main.go:141] libmachine: Parsing certificate...
	I0414 14:29:30.763861 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem
	I0414 14:29:30.763890 1213155 main.go:141] libmachine: Decoding PEM data...
	I0414 14:29:30.763907 1213155 main.go:141] libmachine: Parsing certificate...
	I0414 14:29:30.763954 1213155 main.go:141] libmachine: Running pre-create checks...
	I0414 14:29:30.763968 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .PreCreateCheck
	I0414 14:29:30.764183 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
	I0414 14:29:30.764607 1213155 main.go:141] libmachine: Creating machine...
	I0414 14:29:30.764633 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .Create
	I0414 14:29:30.764796 1213155 main.go:141] libmachine: (ha-290859-m02) creating KVM machine...
	I0414 14:29:30.764820 1213155 main.go:141] libmachine: (ha-290859-m02) creating network...
	I0414 14:29:30.765949 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found existing default KVM network
	I0414 14:29:30.766029 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found existing private KVM network mk-ha-290859
	I0414 14:29:30.766196 1213155 main.go:141] libmachine: (ha-290859-m02) setting up store path in /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02 ...
	I0414 14:29:30.766222 1213155 main.go:141] libmachine: (ha-290859-m02) building disk image from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 14:29:30.766301 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:30.766189 1213531 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:29:30.766373 1213155 main.go:141] libmachine: (ha-290859-m02) Downloading /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 14:29:31.062543 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:31.062391 1213531 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa...
	I0414 14:29:31.719024 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:31.718890 1213531 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/ha-290859-m02.rawdisk...
	I0414 14:29:31.719061 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Writing magic tar header
	I0414 14:29:31.719076 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Writing SSH key tar header
	I0414 14:29:31.719086 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:31.719015 1213531 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02 ...
	I0414 14:29:31.719187 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02
	I0414 14:29:31.719213 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02 (perms=drwx------)
	I0414 14:29:31.719221 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines
	I0414 14:29:31.719232 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:29:31.719239 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines (perms=drwxr-xr-x)
	I0414 14:29:31.719270 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368
	I0414 14:29:31.719288 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube (perms=drwxr-xr-x)
	I0414 14:29:31.719298 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 14:29:31.719315 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins
	I0414 14:29:31.719326 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home
	I0414 14:29:31.719336 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | skipping /home - not owner
	I0414 14:29:31.719349 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368 (perms=drwxrwxr-x)
	I0414 14:29:31.719368 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 14:29:31.719380 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 14:29:31.719386 1213155 main.go:141] libmachine: (ha-290859-m02) creating domain...
	I0414 14:29:31.720303 1213155 main.go:141] libmachine: (ha-290859-m02) define libvirt domain using xml: 
	I0414 14:29:31.720321 1213155 main.go:141] libmachine: (ha-290859-m02) <domain type='kvm'>
	I0414 14:29:31.720330 1213155 main.go:141] libmachine: (ha-290859-m02)   <name>ha-290859-m02</name>
	I0414 14:29:31.720338 1213155 main.go:141] libmachine: (ha-290859-m02)   <memory unit='MiB'>2200</memory>
	I0414 14:29:31.720346 1213155 main.go:141] libmachine: (ha-290859-m02)   <vcpu>2</vcpu>
	I0414 14:29:31.720352 1213155 main.go:141] libmachine: (ha-290859-m02)   <features>
	I0414 14:29:31.720359 1213155 main.go:141] libmachine: (ha-290859-m02)     <acpi/>
	I0414 14:29:31.720364 1213155 main.go:141] libmachine: (ha-290859-m02)     <apic/>
	I0414 14:29:31.720371 1213155 main.go:141] libmachine: (ha-290859-m02)     <pae/>
	I0414 14:29:31.720381 1213155 main.go:141] libmachine: (ha-290859-m02)     
	I0414 14:29:31.720411 1213155 main.go:141] libmachine: (ha-290859-m02)   </features>
	I0414 14:29:31.720433 1213155 main.go:141] libmachine: (ha-290859-m02)   <cpu mode='host-passthrough'>
	I0414 14:29:31.720452 1213155 main.go:141] libmachine: (ha-290859-m02)   
	I0414 14:29:31.720461 1213155 main.go:141] libmachine: (ha-290859-m02)   </cpu>
	I0414 14:29:31.720488 1213155 main.go:141] libmachine: (ha-290859-m02)   <os>
	I0414 14:29:31.720507 1213155 main.go:141] libmachine: (ha-290859-m02)     <type>hvm</type>
	I0414 14:29:31.720537 1213155 main.go:141] libmachine: (ha-290859-m02)     <boot dev='cdrom'/>
	I0414 14:29:31.720559 1213155 main.go:141] libmachine: (ha-290859-m02)     <boot dev='hd'/>
	I0414 14:29:31.720572 1213155 main.go:141] libmachine: (ha-290859-m02)     <bootmenu enable='no'/>
	I0414 14:29:31.720587 1213155 main.go:141] libmachine: (ha-290859-m02)   </os>
	I0414 14:29:31.720597 1213155 main.go:141] libmachine: (ha-290859-m02)   <devices>
	I0414 14:29:31.720609 1213155 main.go:141] libmachine: (ha-290859-m02)     <disk type='file' device='cdrom'>
	I0414 14:29:31.720626 1213155 main.go:141] libmachine: (ha-290859-m02)       <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/boot2docker.iso'/>
	I0414 14:29:31.720637 1213155 main.go:141] libmachine: (ha-290859-m02)       <target dev='hdc' bus='scsi'/>
	I0414 14:29:31.720649 1213155 main.go:141] libmachine: (ha-290859-m02)       <readonly/>
	I0414 14:29:31.720659 1213155 main.go:141] libmachine: (ha-290859-m02)     </disk>
	I0414 14:29:31.720668 1213155 main.go:141] libmachine: (ha-290859-m02)     <disk type='file' device='disk'>
	I0414 14:29:31.720684 1213155 main.go:141] libmachine: (ha-290859-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 14:29:31.720699 1213155 main.go:141] libmachine: (ha-290859-m02)       <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/ha-290859-m02.rawdisk'/>
	I0414 14:29:31.720732 1213155 main.go:141] libmachine: (ha-290859-m02)       <target dev='hda' bus='virtio'/>
	I0414 14:29:31.720746 1213155 main.go:141] libmachine: (ha-290859-m02)     </disk>
	I0414 14:29:31.720756 1213155 main.go:141] libmachine: (ha-290859-m02)     <interface type='network'>
	I0414 14:29:31.720768 1213155 main.go:141] libmachine: (ha-290859-m02)       <source network='mk-ha-290859'/>
	I0414 14:29:31.720777 1213155 main.go:141] libmachine: (ha-290859-m02)       <model type='virtio'/>
	I0414 14:29:31.720788 1213155 main.go:141] libmachine: (ha-290859-m02)     </interface>
	I0414 14:29:31.720799 1213155 main.go:141] libmachine: (ha-290859-m02)     <interface type='network'>
	I0414 14:29:31.720809 1213155 main.go:141] libmachine: (ha-290859-m02)       <source network='default'/>
	I0414 14:29:31.720821 1213155 main.go:141] libmachine: (ha-290859-m02)       <model type='virtio'/>
	I0414 14:29:31.720835 1213155 main.go:141] libmachine: (ha-290859-m02)     </interface>
	I0414 14:29:31.720844 1213155 main.go:141] libmachine: (ha-290859-m02)     <serial type='pty'>
	I0414 14:29:31.720855 1213155 main.go:141] libmachine: (ha-290859-m02)       <target port='0'/>
	I0414 14:29:31.720865 1213155 main.go:141] libmachine: (ha-290859-m02)     </serial>
	I0414 14:29:31.720875 1213155 main.go:141] libmachine: (ha-290859-m02)     <console type='pty'>
	I0414 14:29:31.720886 1213155 main.go:141] libmachine: (ha-290859-m02)       <target type='serial' port='0'/>
	I0414 14:29:31.720896 1213155 main.go:141] libmachine: (ha-290859-m02)     </console>
	I0414 14:29:31.720909 1213155 main.go:141] libmachine: (ha-290859-m02)     <rng model='virtio'>
	I0414 14:29:31.720943 1213155 main.go:141] libmachine: (ha-290859-m02)       <backend model='random'>/dev/random</backend>
	I0414 14:29:31.720956 1213155 main.go:141] libmachine: (ha-290859-m02)     </rng>
	I0414 14:29:31.720962 1213155 main.go:141] libmachine: (ha-290859-m02)     
	I0414 14:29:31.720972 1213155 main.go:141] libmachine: (ha-290859-m02)     
	I0414 14:29:31.720978 1213155 main.go:141] libmachine: (ha-290859-m02)   </devices>
	I0414 14:29:31.720993 1213155 main.go:141] libmachine: (ha-290859-m02) </domain>
	I0414 14:29:31.721002 1213155 main.go:141] libmachine: (ha-290859-m02) 
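	(The XML logged above is the complete libvirt definition for the m02 VM: boot from the boot2docker ISO, a raw virtio disk, one NIC on the private mk-ha-290859 network plus one on the default network, a serial console, and a virtio RNG. A minimal sketch of rendering such a definition from a Go text/template; field names here are hypothetical, not minikube's drivers/kvm types:)

	package main

	// Sketch only: render a trimmed-down libvirt domain XML like the one
	// logged above from a template plus a config struct.
	import (
		"os"
		"text/template"
	)

	const domainTmpl = `<domain type='kvm'>
	  <name>{{.Name}}</name>
	  <memory unit='MiB'>{{.MemoryMiB}}</memory>
	  <vcpu>{{.CPUs}}</vcpu>
	  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
	  <devices>
	    <disk type='file' device='disk'>
	      <source file='{{.DiskPath}}'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='{{.Network}}'/>
	      <model type='virtio'/>
	    </interface>
	  </devices>
	</domain>`

	type domain struct {
		Name      string
		MemoryMiB int
		CPUs      int
		DiskPath  string
		Network   string
	}

	func main() {
		t := template.Must(template.New("domain").Parse(domainTmpl))
		// Values mirror the ha-290859-m02 VM from the log above.
		d := domain{Name: "ha-290859-m02", MemoryMiB: 2200, CPUs: 2,
			DiskPath: "/path/to/ha-290859-m02.rawdisk", Network: "mk-ha-290859"}
		if err := t.Execute(os.Stdout, d); err != nil {
			panic(err)
		}
	}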
	I0414 14:29:31.727524 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:76:01:7d in network default
	I0414 14:29:31.728172 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:31.728187 1213155 main.go:141] libmachine: (ha-290859-m02) starting domain...
	I0414 14:29:31.728195 1213155 main.go:141] libmachine: (ha-290859-m02) ensuring networks are active...
	I0414 14:29:31.728896 1213155 main.go:141] libmachine: (ha-290859-m02) Ensuring network default is active
	I0414 14:29:31.729170 1213155 main.go:141] libmachine: (ha-290859-m02) Ensuring network mk-ha-290859 is active
	I0414 14:29:31.729521 1213155 main.go:141] libmachine: (ha-290859-m02) getting domain XML...
	I0414 14:29:31.730489 1213155 main.go:141] libmachine: (ha-290859-m02) creating domain...
	I0414 14:29:32.993969 1213155 main.go:141] libmachine: (ha-290859-m02) waiting for IP...
	I0414 14:29:32.996009 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:32.996441 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:32.996505 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:32.996448 1213531 retry.go:31] will retry after 202.522594ms: waiting for domain to come up
	I0414 14:29:33.201175 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:33.201705 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:33.201751 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:33.201682 1213531 retry.go:31] will retry after 346.96007ms: waiting for domain to come up
	I0414 14:29:33.550485 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:33.550900 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:33.550931 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:33.550863 1213531 retry.go:31] will retry after 407.207189ms: waiting for domain to come up
	I0414 14:29:33.959550 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:33.960116 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:33.960149 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:33.960094 1213531 retry.go:31] will retry after 434.401549ms: waiting for domain to come up
	I0414 14:29:34.395749 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:34.396217 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:34.396267 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:34.396208 1213531 retry.go:31] will retry after 552.547121ms: waiting for domain to come up
	I0414 14:29:34.949860 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:34.950310 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:34.950344 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:34.950269 1213531 retry.go:31] will retry after 848.939274ms: waiting for domain to come up
	I0414 14:29:35.800706 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:35.801275 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:35.801301 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:35.801229 1213531 retry.go:31] will retry after 1.078619357s: waiting for domain to come up
	I0414 14:29:36.881700 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:36.882163 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:36.882187 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:36.882128 1213531 retry.go:31] will retry after 1.079210669s: waiting for domain to come up
	I0414 14:29:37.963455 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:37.963935 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:37.963969 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:37.963899 1213531 retry.go:31] will retry after 1.194058186s: waiting for domain to come up
	I0414 14:29:39.160481 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:39.160993 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:39.161031 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:39.160949 1213531 retry.go:31] will retry after 1.513626688s: waiting for domain to come up
	I0414 14:29:40.676551 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:40.677038 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:40.677071 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:40.677004 1213531 retry.go:31] will retry after 1.924347004s: waiting for domain to come up
	I0414 14:29:42.603644 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:42.604168 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:42.604192 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:42.604145 1213531 retry.go:31] will retry after 2.797639018s: waiting for domain to come up
	I0414 14:29:45.405004 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:45.405658 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:45.405688 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:45.405627 1213531 retry.go:31] will retry after 2.864814671s: waiting for domain to come up
	I0414 14:29:48.274060 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:48.274518 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:48.274591 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:48.274508 1213531 retry.go:31] will retry after 4.611052523s: waiting for domain to come up
	I0414 14:29:52.886693 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:52.887068 1213155 main.go:141] libmachine: (ha-290859-m02) found domain IP: 192.168.39.111
	I0414 14:29:52.887093 1213155 main.go:141] libmachine: (ha-290859-m02) reserving static IP address...
	I0414 14:29:52.887105 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has current primary IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:52.887506 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find host DHCP lease matching {name: "ha-290859-m02", mac: "52:54:00:f0:fd:94", ip: "192.168.39.111"} in network mk-ha-290859
	I0414 14:29:52.966052 1213155 main.go:141] libmachine: (ha-290859-m02) reserved static IP address 192.168.39.111 for domain ha-290859-m02
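	(The "will retry after ..." lines above show the waiting-for-IP loop backing off: delays grow from roughly 200ms toward several seconds, with jitter, until the DHCP lease appears. A minimal sketch of that pattern, assuming jittered exponential backoff with a cap; an illustration, not minikube's retry package:)

	package main

	// Sketch only: poll a condition with jittered, capped exponential backoff,
	// as the retry.go log lines above suggest.
	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func waitFor(cond func() (bool, error), initial, max, timeout time.Duration) error {
		delay := initial
		start := time.Now()
		for time.Since(start) < timeout {
			ok, err := cond()
			if err != nil {
				return err
			}
			if ok {
				return nil
			}
			// Jittered exponential backoff, capped at max.
			sleep := delay + time.Duration(rand.Int63n(int64(delay/2)+1))
			fmt.Printf("will retry after %v\n", sleep)
			time.Sleep(sleep)
			if delay *= 2; delay > max {
				delay = max
			}
		}
		return errors.New("timed out waiting for condition")
	}

	func main() {
		attempts := 0
		_ = waitFor(func() (bool, error) {
			attempts++
			return attempts >= 5, nil // stand-in for "does the domain have an IP yet?"
		}, 200*time.Millisecond, 5*time.Second, time.Minute)
	}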
	I0414 14:29:52.966083 1213155 main.go:141] libmachine: (ha-290859-m02) waiting for SSH...
	I0414 14:29:52.966091 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Getting to WaitForSSH function...
	I0414 14:29:52.968665 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:52.969034 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:52.969082 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:52.969208 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Using SSH client type: external
	I0414 14:29:52.969231 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa (-rw-------)
	I0414 14:29:52.969263 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:29:52.969282 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | About to run SSH command:
	I0414 14:29:52.969295 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | exit 0
	I0414 14:29:53.095336 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | SSH cmd err, output: <nil>: 
	I0414 14:29:53.095545 1213155 main.go:141] libmachine: (ha-290859-m02) KVM machine creation complete
	I0414 14:29:53.095910 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
	I0414 14:29:53.096462 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:53.096622 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:53.096806 1213155 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 14:29:53.096820 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetState
	I0414 14:29:53.098070 1213155 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 14:29:53.098085 1213155 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 14:29:53.098090 1213155 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 14:29:53.098095 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.100244 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.100649 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.100680 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.100852 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.101066 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.101236 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.101372 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.101519 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:53.101769 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:53.101782 1213155 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 14:29:53.206593 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:29:53.206617 1213155 main.go:141] libmachine: Detecting the provisioner...
	I0414 14:29:53.206628 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.209588 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.209969 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.209988 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.210187 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.210382 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.210544 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.210717 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.210971 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:53.211192 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:53.211205 1213155 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 14:29:53.315888 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 14:29:53.315980 1213155 main.go:141] libmachine: found compatible host: buildroot
	I0414 14:29:53.315990 1213155 main.go:141] libmachine: Provisioning with buildroot...
	I0414 14:29:53.316001 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:29:53.316277 1213155 buildroot.go:166] provisioning hostname "ha-290859-m02"
	I0414 14:29:53.316306 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:29:53.316451 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.319393 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.319803 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.319837 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.319946 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.320140 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.320321 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.320450 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.320602 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:53.320806 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:53.320818 1213155 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-290859-m02 && echo "ha-290859-m02" | sudo tee /etc/hostname
	I0414 14:29:53.442594 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-290859-m02
	
	I0414 14:29:53.442629 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.445561 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.445918 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.445944 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.446150 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.446351 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.446528 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.446678 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.446833 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:53.447038 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:53.447053 1213155 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-290859-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-290859-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-290859-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:29:53.559946 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:29:53.559988 1213155 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1196368/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1196368/.minikube}
	I0414 14:29:53.560014 1213155 buildroot.go:174] setting up certificates
	I0414 14:29:53.560031 1213155 provision.go:84] configureAuth start
	I0414 14:29:53.560046 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:29:53.560377 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:29:53.562822 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.563207 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.563237 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.563574 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.566107 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.566478 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.566505 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.566628 1213155 provision.go:143] copyHostCerts
	I0414 14:29:53.566676 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:29:53.566716 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem, removing ...
	I0414 14:29:53.566730 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:29:53.566839 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem (1082 bytes)
	I0414 14:29:53.566954 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:29:53.566979 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem, removing ...
	I0414 14:29:53.566987 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:29:53.567026 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem (1123 bytes)
	I0414 14:29:53.567106 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:29:53.567130 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem, removing ...
	I0414 14:29:53.567137 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:29:53.567173 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem (1675 bytes)
	I0414 14:29:53.567293 1213155 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem org=jenkins.ha-290859-m02 san=[127.0.0.1 192.168.39.111 ha-290859-m02 localhost minikube]
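	(The provision.go line above generates a server certificate signed by the local minikube CA, with SANs covering the loopback address, the VM IP, the hostname, and "minikube". A minimal sketch of that step using crypto/x509, with stand-in keys and lifetimes; not minikube's cert utilities:)

	package main

	// Sketch only: create a CA, then sign a server cert with the SANs
	// listed in the log line above.
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"log"
		"math/big"
		"net"
		"time"
	)

	func main() {
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048) // stand-in for ca-key.pem
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		ca, _ := x509.ParseCertificate(caDER)

		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-290859-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			// SANs mirror the "san=[...]" list in the log line above.
			DNSNames:    []string{"ha-290859-m02", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.111")},
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		if _, err := x509.CreateCertificate(rand.Reader, srvTmpl, ca, &srvKey.PublicKey, caKey); err != nil {
			log.Fatal(err)
		}
		log.Println("server cert signed by CA")
	}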
	I0414 14:29:53.976110 1213155 provision.go:177] copyRemoteCerts
	I0414 14:29:53.976184 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:29:53.976219 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.978798 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.979170 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.979202 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.979355 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.979571 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.979771 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.979950 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:29:54.060926 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0414 14:29:54.061020 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0414 14:29:54.083723 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0414 14:29:54.083818 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 14:29:54.106702 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0414 14:29:54.106773 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 14:29:54.128136 1213155 provision.go:87] duration metric: took 568.088664ms to configureAuth
	I0414 14:29:54.128177 1213155 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:29:54.128372 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:54.128400 1213155 main.go:141] libmachine: Checking connection to Docker...
	I0414 14:29:54.128413 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetURL
	I0414 14:29:54.129571 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | using libvirt version 6000000
	I0414 14:29:54.131690 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.132071 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.132095 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.132296 1213155 main.go:141] libmachine: Docker is up and running!
	I0414 14:29:54.132311 1213155 main.go:141] libmachine: Reticulating splines...
	I0414 14:29:54.132318 1213155 client.go:171] duration metric: took 23.368636066s to LocalClient.Create
	I0414 14:29:54.132344 1213155 start.go:167] duration metric: took 23.368708618s to libmachine.API.Create "ha-290859"
	I0414 14:29:54.132356 1213155 start.go:293] postStartSetup for "ha-290859-m02" (driver="kvm2")
	I0414 14:29:54.132370 1213155 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:29:54.132394 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.132652 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:29:54.132681 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:54.134726 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.135119 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.135146 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.135312 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:54.135512 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.135648 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:54.135782 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:29:54.217134 1213155 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:29:54.221237 1213155 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:29:54.221265 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/addons for local assets ...
	I0414 14:29:54.221324 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/files for local assets ...
	I0414 14:29:54.221392 1213155 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> 12036392.pem in /etc/ssl/certs
	I0414 14:29:54.221401 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /etc/ssl/certs/12036392.pem
	I0414 14:29:54.221495 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:29:54.230111 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /etc/ssl/certs/12036392.pem (1708 bytes)
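	
	The filesync pass above mirrors everything under .minikube/files onto the node at the same path relative to /, which is how 12036392.pem ends up in /etc/ssl/certs. A sketch of that source-to-destination mapping follows; the actual scp step is elided and the function name is an assumption.
	
	package main
	
	import (
		"fmt"
		"io/fs"
		"log"
		"path/filepath"
	)
	
	// scanLocalAssets maps every regular file under root to its destination
	// on the node: root/etc/ssl/certs/x.pem -> /etc/ssl/certs/x.pem.
	func scanLocalAssets(root string) (map[string]string, error) {
		assets := map[string]string{}
		err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
			if err != nil || d.IsDir() {
				return err
			}
			rel, err := filepath.Rel(root, path)
			if err != nil {
				return err
			}
			assets[path] = "/" + filepath.ToSlash(rel)
			return nil
		})
		return assets, err
	}
	
	func main() {
		assets, err := scanLocalAssets("/home/jenkins/minikube-integration/20512-1196368/.minikube/files")
		if err != nil {
			log.Fatal(err)
		}
		for src, dst := range assets {
			fmt.Printf("%s -> %s\n", src, dst)
		}
	}
	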
	I0414 14:29:54.253934 1213155 start.go:296] duration metric: took 121.560617ms for postStartSetup
	I0414 14:29:54.253995 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
	I0414 14:29:54.254683 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:29:54.257374 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.257778 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.257811 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.258118 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:29:54.258332 1213155 start.go:128] duration metric: took 23.513984018s to createHost
	I0414 14:29:54.258362 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:54.260873 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.261257 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.261285 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.261448 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:54.261638 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.261821 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.261984 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:54.262185 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:54.262369 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:54.262379 1213155 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:29:54.367727 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744640994.343893226
	
	I0414 14:29:54.367759 1213155 fix.go:216] guest clock: 1744640994.343893226
	I0414 14:29:54.367766 1213155 fix.go:229] Guest: 2025-04-14 14:29:54.343893226 +0000 UTC Remote: 2025-04-14 14:29:54.258346943 +0000 UTC m=+69.442509295 (delta=85.546283ms)
	I0414 14:29:54.367782 1213155 fix.go:200] guest clock delta is within tolerance: 85.546283ms
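	
	fix.go reads the guest's `date +%s.%N`, diffs it against the host clock, and only resyncs when the delta exceeds a tolerance; the 85.546283ms delta above passes. A sketch of that check, reusing the two timestamps from the log; the 2s tolerance is an assumption, not minikube's configured value.
	
	package main
	
	import (
		"fmt"
		"math"
		"strconv"
		"time"
	)
	
	// withinTolerance parses the guest's `date +%s.%N` output and reports
	// whether it is within tol of the host clock. Float parsing is only
	// approximate at nanosecond scale, which is fine for this check.
	func withinTolerance(guestOut string, host time.Time, tol time.Duration) (time.Duration, bool) {
		secs, err := strconv.ParseFloat(guestOut, 64)
		if err != nil {
			return 0, false
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := host.Sub(guest)
		return delta, math.Abs(float64(delta)) <= float64(tol)
	}
	
	func main() {
		// Guest and host instants as logged at 14:29:54.
		delta, ok := withinTolerance("1744640994.343893226", time.Unix(0, 1744640994258346943), 2*time.Second)
		fmt.Printf("delta=%v withinTolerance=%v\n", delta, ok)
	}
	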
	I0414 14:29:54.367788 1213155 start.go:83] releasing machines lock for "ha-290859-m02", held for 23.623550564s
	I0414 14:29:54.367807 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.368115 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:29:54.370975 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.371432 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.371462 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.373758 1213155 out.go:177] * Found network options:
	I0414 14:29:54.375127 1213155 out.go:177]   - NO_PROXY=192.168.39.110
	W0414 14:29:54.376278 1213155 proxy.go:119] fail to check proxy env: Error ip not in block
	I0414 14:29:54.376312 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.376913 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.377127 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.377268 1213155 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:29:54.377316 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	W0414 14:29:54.377370 1213155 proxy.go:119] fail to check proxy env: Error ip not in block
	I0414 14:29:54.377457 1213155 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0414 14:29:54.377481 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:54.380102 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.380374 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.380406 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.380429 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.380578 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:54.380741 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.380859 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.380897 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.380909 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:54.381045 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:29:54.381125 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:54.381305 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.381467 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:54.381614 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	W0414 14:29:54.458225 1213155 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:29:54.458308 1213155 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:29:54.490449 1213155 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 14:29:54.490475 1213155 start.go:495] detecting cgroup driver to use...
	I0414 14:29:54.490555 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0414 14:29:54.524660 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0414 14:29:54.537871 1213155 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:29:54.537936 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:29:54.549801 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:29:54.562203 1213155 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:29:54.666348 1213155 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:29:54.786710 1213155 docker.go:233] disabling docker service ...
	I0414 14:29:54.786789 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:29:54.800092 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:29:54.812105 1213155 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:29:54.936777 1213155 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:29:55.059002 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 14:29:55.072980 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:29:55.089970 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0414 14:29:55.099362 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0414 14:29:55.108681 1213155 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0414 14:29:55.108766 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0414 14:29:55.118203 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:29:55.127402 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0414 14:29:55.136483 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:29:55.145554 1213155 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:29:55.154769 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0414 14:29:55.163700 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0414 14:29:55.172612 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
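	
	The run of sed commands above patches /etc/containerd/config.toml in place: pin the pause image, force SystemdCgroup = false for the cgroupfs driver, migrate io.containerd.runtime.v1.linux and runc.v1 references to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. The same line-oriented patching expressed in Go for a few of those rewrites (patterns taken from the log; a production tool would use a TOML parser instead of regexes):
	
	package main
	
	import (
		"log"
		"os"
		"regexp"
	)
	
	// patchContainerdConfig applies the same rewrites the sed commands
	// in the log perform against /etc/containerd/config.toml.
	func patchContainerdConfig(path string) error {
		rules := []struct{ re, repl string }{
			{`(?m)^( *)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.10"`},
			{`(?m)^( *)restrict_oom_score_adj = .*$`, `${1}restrict_oom_score_adj = false`},
			{`(?m)^( *)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
			{`"io\.containerd\.runtime\.v1\.linux"`, `"io.containerd.runc.v2"`},
			{`"io\.containerd\.runc\.v1"`, `"io.containerd.runc.v2"`},
			{`(?m)^( *)conf_dir = .*$`, `${1}conf_dir = "/etc/cni/net.d"`},
		}
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		for _, r := range rules {
			data = regexp.MustCompile(r.re).ReplaceAll(data, []byte(r.repl))
		}
		return os.WriteFile(path, data, 0o644)
	}
	
	func main() {
		if err := patchContainerdConfig("/etc/containerd/config.toml"); err != nil {
			log.Fatal(err)
		}
	}
	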
	I0414 14:29:55.181597 1213155 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:29:55.189962 1213155 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:29:55.190019 1213155 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:29:55.202112 1213155 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
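	
	The sysctl probe fails with status 255 because net.bridge.bridge-nf-call-iptables only exists once br_netfilter is loaded, so the failure is treated as non-fatal ("might be okay"), the module is loaded, and IPv4 forwarding is enabled through /proc. A compact sketch of that fallback; it must run as root.
	
	package main
	
	import (
		"log"
		"os"
		"os/exec"
	)
	
	func main() {
		// The sysctl key is absent until br_netfilter is loaded, so a
		// failure here is expected on a fresh VM.
		if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
			if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
				log.Printf("modprobe br_netfilter: %v (continuing)", err)
			}
		}
		// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
			log.Fatal(err)
		}
	}
	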
	I0414 14:29:55.210883 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:29:55.319480 1213155 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0414 14:29:55.344914 1213155 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0414 14:29:55.345008 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:29:55.349081 1213155 retry.go:31] will retry after 1.00520308s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0414 14:29:56.354657 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
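	
	After restarting containerd, the runner polls for /run/containerd/containerd.sock with a bounded retry: the first stat at 14:29:55 fails, and the retry a second later succeeds. A standalone version of that wait loop; the interval is an assumption, since retry.go actually backs off by a computed duration.
	
	package main
	
	import (
		"fmt"
		"log"
		"os"
		"time"
	)
	
	// waitForSocket polls path until it exists or the deadline passes,
	// roughly mirroring the "Will wait 60s for socket path" behaviour.
	func waitForSocket(path string, timeout, interval time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
			}
			time.Sleep(interval)
		}
	}
	
	func main() {
		if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second, time.Second); err != nil {
			log.Fatal(err)
		}
		log.Println("containerd socket is ready")
	}
	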
	I0414 14:29:56.359600 1213155 start.go:563] Will wait 60s for crictl version
	I0414 14:29:56.359685 1213155 ssh_runner.go:195] Run: which crictl
	I0414 14:29:56.363336 1213155 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:29:56.403201 1213155 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0414 14:29:56.403312 1213155 ssh_runner.go:195] Run: containerd --version
	I0414 14:29:56.430179 1213155 ssh_runner.go:195] Run: containerd --version
	I0414 14:29:56.454598 1213155 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.23 ...
	I0414 14:29:56.455785 1213155 out.go:177]   - env NO_PROXY=192.168.39.110
	I0414 14:29:56.456735 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:29:56.459280 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:56.459661 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:56.459691 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:56.459901 1213155 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 14:29:56.463673 1213155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
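	
	The bash one-liner above rewrites /etc/hosts by filtering out any stale host.minikube.internal line, appending the fresh mapping, and copying a temp file back over the original. The same edit in Go, keeping the write-temp-then-replace shape; the function name is illustrative.
	
	package main
	
	import (
		"log"
		"os"
		"strings"
	)
	
	// pinHostEntry rewrites hostsPath so exactly one line maps name to ip.
	func pinHostEntry(hostsPath, ip, name string) error {
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Same filter as grep -v $'\thost.minikube.internal$'.
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		tmp := hostsPath + ".tmp"
		if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			return err
		}
		return os.Rename(tmp, hostsPath) // needs root for the real /etc/hosts
	}
	
	func main() {
		if err := pinHostEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
			log.Fatal(err)
		}
	}
	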
	I0414 14:29:56.475057 1213155 mustload.go:65] Loading cluster: ha-290859
	I0414 14:29:56.475248 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:56.475557 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:56.475600 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:56.490597 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45247
	I0414 14:29:56.491136 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:56.491690 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:56.491711 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:56.492119 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:56.492309 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:56.493794 1213155 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:29:56.494134 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:56.494173 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:56.509360 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38381
	I0414 14:29:56.509774 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:56.510229 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:56.510256 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:56.510618 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:56.510840 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:56.511031 1213155 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859 for IP: 192.168.39.111
	I0414 14:29:56.511044 1213155 certs.go:194] generating shared ca certs ...
	I0414 14:29:56.511057 1213155 certs.go:226] acquiring lock for ca certs: {Name:mk7215406b4c41badf9eca6bf9f1036fd88f670e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:56.511177 1213155 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key
	I0414 14:29:56.511226 1213155 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key
	I0414 14:29:56.511236 1213155 certs.go:256] generating profile certs ...
	I0414 14:29:56.511347 1213155 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key
	I0414 14:29:56.511373 1213155 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e
	I0414 14:29:56.511386 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110 192.168.39.111 192.168.39.254]
	I0414 14:29:56.589532 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e ...
	I0414 14:29:56.589564 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e: {Name:mk9fb7b2adad4a62e9ebf1f50826b8647aaaa2d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:56.589727 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e ...
	I0414 14:29:56.589740 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e: {Name:mk7ad07038879568d4a23c2fb5c04f12405eb02f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:56.589811 1213155 certs.go:381] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt
	I0414 14:29:56.589948 1213155 certs.go:385] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key
	I0414 14:29:56.590096 1213155 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key
	I0414 14:29:56.590118 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0414 14:29:56.590137 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0414 14:29:56.590151 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0414 14:29:56.590162 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0414 14:29:56.590180 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0414 14:29:56.590198 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0414 14:29:56.590211 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0414 14:29:56.590220 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0414 14:29:56.590271 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem (1338 bytes)
	W0414 14:29:56.590298 1213155 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639_empty.pem, impossibly tiny 0 bytes
	I0414 14:29:56.590308 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 14:29:56.590327 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem (1082 bytes)
	I0414 14:29:56.590346 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:29:56.590368 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem (1675 bytes)
	I0414 14:29:56.590404 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:29:56.590430 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:56.590446 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem -> /usr/share/ca-certificates/1203639.pem
	I0414 14:29:56.590457 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /usr/share/ca-certificates/12036392.pem
	I0414 14:29:56.590494 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:56.593379 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:56.593755 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:56.593777 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:56.593996 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:56.594232 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:56.594405 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:56.594540 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:56.671687 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0414 14:29:56.677338 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0414 14:29:56.689003 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0414 14:29:56.693487 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0414 14:29:56.704430 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0414 14:29:56.708650 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0414 14:29:56.719039 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0414 14:29:56.723166 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0414 14:29:56.734152 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0414 14:29:56.738243 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0414 14:29:56.749081 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0414 14:29:56.753248 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0414 14:29:56.764073 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:29:56.788198 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:29:56.813073 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:29:56.835958 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:29:56.859645 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0414 14:29:56.882879 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 14:29:56.906187 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:29:56.928932 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 14:29:56.952365 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:29:56.974920 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem --> /usr/share/ca-certificates/1203639.pem (1338 bytes)
	I0414 14:29:56.998466 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /usr/share/ca-certificates/12036392.pem (1708 bytes)
	I0414 14:29:57.022704 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0414 14:29:57.038828 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0414 14:29:57.054237 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0414 14:29:57.069513 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0414 14:29:57.085532 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0414 14:29:57.101522 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0414 14:29:57.117372 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0414 14:29:57.132827 1213155 ssh_runner.go:195] Run: openssl version
	I0414 14:29:57.138331 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:29:57.148324 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:57.152469 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:57.152557 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:57.158279 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 14:29:57.169126 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1203639.pem && ln -fs /usr/share/ca-certificates/1203639.pem /etc/ssl/certs/1203639.pem"
	I0414 14:29:57.179995 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1203639.pem
	I0414 14:29:57.184265 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1203639.pem
	I0414 14:29:57.184340 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1203639.pem
	I0414 14:29:57.189810 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1203639.pem /etc/ssl/certs/51391683.0"
	I0414 14:29:57.199987 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12036392.pem && ln -fs /usr/share/ca-certificates/12036392.pem /etc/ssl/certs/12036392.pem"
	I0414 14:29:57.210177 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12036392.pem
	I0414 14:29:57.214740 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/12036392.pem
	I0414 14:29:57.214815 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12036392.pem
	I0414 14:29:57.221853 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12036392.pem /etc/ssl/certs/3ec20f2e.0"
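	
	Each trusted CA is installed under /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0, 51391683.0 and 3ec20f2e.0 above); the hash-named link is what lets OpenSSL-based clients locate the cert. A sketch that computes the hash by shelling out to openssl, as the log does, then creates the link with ln -fs semantics:
	
	package main
	
	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	// linkBySubjectHash symlinks certPath into /etc/ssl/certs under
	// "<openssl subject hash>.0".
	func linkBySubjectHash(certPath string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return "", err
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		os.Remove(link) // ln -fs: replace any existing link
		return link, os.Symlink(certPath, link)
	}
	
	func main() {
		link, err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("linked as", link)
	}
	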
	I0414 14:29:57.232248 1213155 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:29:57.236270 1213155 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 14:29:57.236327 1213155 kubeadm.go:934] updating node {m02 192.168.39.111 8443 v1.32.2 containerd true true} ...
	I0414 14:29:57.236439 1213155 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-290859-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
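	
	The kubelet drop-in above is rendered per node: ExecStart is cleared and re-set with the node-specific --hostname-override and --node-ip from the cluster config. A text/template sketch of that rendering; the data struct is illustrative, not minikube's actual config type.
	
	package main
	
	import (
		"log"
		"os"
		"text/template"
	)
	
	const kubeletUnit = `[Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
	
	[Install]
	`
	
	func main() {
		data := struct {
			KubernetesVersion, NodeName, NodeIP string
		}{"v1.32.2", "ha-290859-m02", "192.168.39.111"}
		tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
		if err := tmpl.Execute(os.Stdout, data); err != nil {
			log.Fatal(err)
		}
	}
	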
	I0414 14:29:57.236473 1213155 kube-vip.go:115] generating kube-vip config ...
	I0414 14:29:57.236525 1213155 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0414 14:29:57.252239 1213155 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0414 14:29:57.252336 1213155 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable

	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.10
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
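	
	The generated static pod runs kube-vip with ARP-based leader election for the control-plane VIP 192.168.39.254 on port 8443 and load-balancing across control planes enabled. Before a manifest like this lands in /etc/kubernetes/manifests it is cheap to confirm it parses and carries the expected env vars; a sketch using gopkg.in/yaml.v3 with an ad hoc struct (not a Kubernetes API type), reading the manifest from a local file as an assumption:
	
	package main
	
	import (
		"fmt"
		"log"
		"os"
	
		"gopkg.in/yaml.v3"
	)
	
	type manifest struct {
		Kind string `yaml:"kind"`
		Spec struct {
			Containers []struct {
				Image string `yaml:"image"`
				Env   []struct {
					Name  string `yaml:"name"`
					Value string `yaml:"value"`
				} `yaml:"env"`
			} `yaml:"containers"`
		} `yaml:"spec"`
	}
	
	func main() {
		raw, err := os.ReadFile("kube-vip.yaml") // the manifest shown above, saved locally
		if err != nil {
			log.Fatal(err)
		}
		var m manifest
		if err := yaml.Unmarshal(raw, &m); err != nil {
			log.Fatal(err)
		}
		env := map[string]string{}
		for _, e := range m.Spec.Containers[0].Env {
			env[e.Name] = e.Value
		}
		fmt.Printf("kind=%s image=%s vip=%s port=%s lb=%s\n",
			m.Kind, m.Spec.Containers[0].Image, env["address"], env["port"], env["lb_enable"])
	}
	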
	I0414 14:29:57.252412 1213155 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 14:29:57.262218 1213155 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.2': No such file or directory
	
	Initiating transfer...
	I0414 14:29:57.262295 1213155 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.2
	I0414 14:29:57.271580 1213155 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
	I0414 14:29:57.271599 1213155 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubeadm
	I0414 14:29:57.271617 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubectl -> /var/lib/minikube/binaries/v1.32.2/kubectl
	I0414 14:29:57.271622 1213155 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubelet
	I0414 14:29:57.271681 1213155 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl
	I0414 14:29:57.275804 1213155 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubectl': No such file or directory
	I0414 14:29:57.275835 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubectl --> /var/lib/minikube/binaries/v1.32.2/kubectl (57323672 bytes)
	I0414 14:29:58.408400 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:29:58.423781 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubelet -> /var/lib/minikube/binaries/v1.32.2/kubelet
	I0414 14:29:58.423898 1213155 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet
	I0414 14:29:58.428378 1213155 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubelet': No such file or directory
	I0414 14:29:58.428415 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubelet --> /var/lib/minikube/binaries/v1.32.2/kubelet (77406468 bytes)
	I0414 14:29:58.749359 1213155 out.go:201] 
	W0414 14:29:58.750775 1213155 out.go:270] X Exiting due to GUEST_START: failed to start node: adding node: update node: downloading binaries: downloading kubeadm: download failed: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256 Dst:/home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubeadm.download Pwd: Mode:2 Umask:---------- Detectors:[0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0] Decompressors:map[bz2:0xc0004c8690 gz:0xc0004c8698 tar:0xc0004c8610 tar.bz2:0xc0004c8620 tar.gz:0xc0004c8630 tar.xz:0xc0004c8650 tar.zst:0xc0004c8660 tbz2:0xc0004c8620 tgz:0xc0004c8630 txz:0xc0004c8650 tzst:0xc0004c8660 xz:0xc0004c8700 zip:0xc0004c8720 zst:0xc0004c8708] Getters:map[file:0xc00216a250 http:0xc00012c550 https:0xc00012c5a0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: read tcp 10.154.0.3:60586->151.101.193.55:443: read: connection reset by peer
	W0414 14:29:58.750801 1213155 out.go:270] * 
	W0414 14:29:58.751639 1213155 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 14:29:58.753070 1213155 out.go:201] 
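	
	The fatal GUEST_START error above is a transient network failure: go-getter was fetching kubeadm, with the published .sha256 file as the checksum source, when the TCP connection was reset mid-read. kubectl and kubelet had already transferred, so retrying just this artifact would likely have succeeded. Below is a minimal sketch of the same download-verify-write step with a bounded retry, using plain net/http instead of go-getter; it buffers the whole binary in memory, which is fine for a sketch but not for the real cache path.
	
	package main
	
	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"log"
		"net/http"
		"os"
		"strings"
		"time"
	)
	
	func fetch(url string) ([]byte, error) {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
		}
		return io.ReadAll(resp.Body)
	}
	
	// try downloads url, verifies it against url+".sha256", and writes dst.
	func try(url, dst string) error {
		want, err := fetch(url + ".sha256")
		if err != nil {
			return err
		}
		fields := strings.Fields(string(want))
		if len(fields) == 0 {
			return fmt.Errorf("empty checksum file for %s", url)
		}
		body, err := fetch(url)
		if err != nil {
			return err
		}
		sum := sha256.Sum256(body)
		if hex.EncodeToString(sum[:]) != fields[0] {
			return fmt.Errorf("checksum mismatch for %s", url)
		}
		return os.WriteFile(dst, body, 0o755)
	}
	
	// download retries try with a simple linear backoff, which is enough
	// to ride out a one-off connection reset like the failure above.
	func download(url, dst string, attempts int) error {
		var lastErr error
		for i := 0; i < attempts; i++ {
			if lastErr = try(url, dst); lastErr == nil {
				return nil
			}
			time.Sleep(time.Duration(i+1) * 2 * time.Second)
		}
		return lastErr
	}
	
	func main() {
		url := "https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm"
		if err := download(url, "kubeadm", 3); err != nil {
			log.Fatal(err)
		}
		log.Println("kubeadm downloaded and verified")
	}
	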
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	24e6d7cfe7ea4       8c811b4aec35f       12 minutes ago      Running             busybox                   0                   78438e8022143       busybox-58667487b6-t6bgg
	731a9f2fe8645       c69fa2e9cbf5f       13 minutes ago      Running             coredns                   0                   e56d2e4c87eea       coredns-668d6bf9bc-qnl6q
	0ec0a3a234c7c       c69fa2e9cbf5f       13 minutes ago      Running             coredns                   0                   2818c413e6e32       coredns-668d6bf9bc-wbn4p
	922f97d06563e       6e38f40d628db       13 minutes ago      Running             storage-provisioner       0                   4de376d34ee7f       storage-provisioner
	2df8ccb8d6ed9       df3849d954c98       13 minutes ago      Running             kindnet-cni               0                   08244cfc780bd       kindnet-hm99t
	e22a81661302f       f1332858868e1       13 minutes ago      Running             kube-proxy                0                   f20a0bcfbd507       kube-proxy-cg945
	9914f8879fc43       6ff023a402a69       13 minutes ago      Running             kube-vip                  0                   7b4e857fc4a72       kube-vip-ha-290859
	8263b35014337       b6a454c5a800d       13 minutes ago      Running             kube-controller-manager   0                   96ffccfabb2f0       kube-controller-manager-ha-290859
	3607093f95b04       85b7a174738ba       13 minutes ago      Running             kube-apiserver            0                   7d06c53c8318a       kube-apiserver-ha-290859
	b9d0c94204534       a9e7e6b294baf       13 minutes ago      Running             etcd                      0                   07c98c2ded11c       etcd-ha-290859
	341626ffff967       d8e673e7c9983       13 minutes ago      Running             kube-scheduler            0                   d86edf81d4f34       kube-scheduler-ha-290859
	
	
	==> containerd <==
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.168944603Z" level=info msg="StartContainer for \"0ec0a3a234c7c9ab89ca83a237362a229e9c5f0e94fdbf641b886cf994e1cd2f\""
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.181036869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qnl6q,Uid:a590080d-c4b1-4697-9849-ae6130e483a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"e56d2e4c87eea2d527e5c301e33c596e4ec4533b17e49248e3c35eeb66f90f11\""
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.186359489Z" level=info msg="CreateContainer within sandbox \"e56d2e4c87eea2d527e5c301e33c596e4ec4533b17e49248e3c35eeb66f90f11\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.209760426Z" level=info msg="CreateContainer within sandbox \"e56d2e4c87eea2d527e5c301e33c596e4ec4533b17e49248e3c35eeb66f90f11\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0\""
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.212826022Z" level=info msg="StartContainer for \"922f97d06563e10c12ce83edd45e4f1aa0b78449dcdb50b413a7f4fc80cc346b\" returns successfully"
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.215681811Z" level=info msg="StartContainer for \"731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0\""
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.285830032Z" level=info msg="StartContainer for \"0ec0a3a234c7c9ab89ca83a237362a229e9c5f0e94fdbf641b886cf994e1cd2f\" returns successfully"
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.294639585Z" level=info msg="StartContainer for \"731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0\" returns successfully"
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.131928214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-58667487b6-t6bgg,Uid:bd39f57c-bcb5-4d77-b171-6d4d2f237e54,Namespace:default,Attempt:0,}"
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.218617705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.218691310Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.218706805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.218958691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.281907696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-58667487b6-t6bgg,Uid:bd39f57c-bcb5-4d77-b171-6d4d2f237e54,Namespace:default,Attempt:0,} returns sandbox id \"78438e8022143055bed5e2d8a26db130ead88208a68bd14ca25618be3edf24e2\""
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.284050999Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\""
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.401970091Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.404464641Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28: active requests=0, bytes read=727667"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.406415797Z" level=info msg="ImageCreate event name:\"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.409920833Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.411266903Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28\" with image id \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\", repo tag \"gcr.io/k8s-minikube/busybox:1.28\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\", size \"725911\" in 2.127171694s"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.411378057Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\" returns image reference \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\""
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.414728181Z" level=info msg="CreateContainer within sandbox \"78438e8022143055bed5e2d8a26db130ead88208a68bd14ca25618be3edf24e2\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.437197602Z" level=info msg="CreateContainer within sandbox \"78438e8022143055bed5e2d8a26db130ead88208a68bd14ca25618be3edf24e2\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"24e6d7cfe7ea4490a4e08a40f32b9cf717c4d83060631102c580d6adf2fc47f5\""
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.439640223Z" level=info msg="StartContainer for \"24e6d7cfe7ea4490a4e08a40f32b9cf717c4d83060631102c580d6adf2fc47f5\""
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.489937462Z" level=info msg="StartContainer for \"24e6d7cfe7ea4490a4e08a40f32b9cf717c4d83060631102c580d6adf2fc47f5\" returns successfully"
	
	
	==> coredns [0ec0a3a234c7c9ab89ca83a237362a229e9c5f0e94fdbf641b886cf994e1cd2f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:46089 - 56153 "HINFO IN 6072608555509463616.6529762715821029691. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009374887s
	[INFO] 10.244.0.4:35907 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000221161s
	[INFO] 10.244.0.4:36782 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005796917s
	[INFO] 10.244.0.4:41522 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000189547s
	[INFO] 10.244.0.4:42146 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118814s
	[INFO] 10.244.0.4:60607 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123758s
	[INFO] 10.244.0.4:43711 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000363945s
	[INFO] 10.244.0.4:55165 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147511s
	[INFO] 10.244.0.4:37988 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063814s
	[INFO] 10.244.0.4:34715 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110518s
	
	
	==> coredns [731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:50026 - 40228 "HINFO IN 6089878548460793106.7503956428927620962. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010088983s
	[INFO] 10.244.0.4:56129 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00054069s
	[INFO] 10.244.0.4:53926 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.015577927s
	[INFO] 10.244.0.4:39454 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 1.017801671s
	[INFO] 10.244.0.4:52928 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.006480432s
	[INFO] 10.244.0.4:37155 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000144828s
	[INFO] 10.244.0.4:60063 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003567762s
	[INFO] 10.244.0.4:60207 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153406s
	[INFO] 10.244.0.4:60174 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117303s
	[INFO] 10.244.0.4:60031 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124845s
	[INFO] 10.244.0.4:43114 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000177401s
	[INFO] 10.244.0.4:59108 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000291115s
	
	
	==> describe nodes <==
	Name:               ha-290859
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-290859
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ed8f1f01b35eff2786f40199152a1775806f2de2
	                    minikube.k8s.io/name=ha-290859
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_14T14_29_26_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Apr 2025 14:29:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-290859
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Apr 2025 14:42:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Apr 2025 14:42:20 +0000   Mon, 14 Apr 2025 14:29:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Apr 2025 14:42:20 +0000   Mon, 14 Apr 2025 14:29:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Apr 2025 14:42:20 +0000   Mon, 14 Apr 2025 14:29:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Apr 2025 14:42:20 +0000   Mon, 14 Apr 2025 14:29:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.110
	  Hostname:    ha-290859
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0538f5775f954b3bbf6bc94e8eb6c49a
	  System UUID:                0538f577-5f95-4b3b-bf6b-c94e8eb6c49a
	  Boot ID:                    357ae105-a7f9-47b1-bf31-1c1aadedfe92
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-t6bgg             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-668d6bf9bc-qnl6q             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-668d6bf9bc-wbn4p             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-290859                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-hm99t                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-290859             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-290859    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-cg945                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-290859             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-290859                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node ha-290859 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node ha-290859 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node ha-290859 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m   node-controller  Node ha-290859 event: Registered Node ha-290859 in Controller
	  Normal  NodeReady                13m   kubelet          Node ha-290859 status is now: NodeReady
	
	
	Name:               ha-290859-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-290859-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ed8f1f01b35eff2786f40199152a1775806f2de2
	                    minikube.k8s.io/name=ha-290859
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_04_14T14_42_30_0700
	                    minikube.k8s.io/version=v1.35.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Apr 2025 14:42:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-290859-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Apr 2025 14:42:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Apr 2025 14:42:49 +0000   Mon, 14 Apr 2025 14:42:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Apr 2025 14:42:49 +0000   Mon, 14 Apr 2025 14:42:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Apr 2025 14:42:49 +0000   Mon, 14 Apr 2025 14:42:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Apr 2025 14:42:49 +0000   Mon, 14 Apr 2025 14:42:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.112
	  Hostname:    ha-290859-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 96e9da9bd9e1490583702338b88b0c23
	  System UUID:                96e9da9b-d9e1-4905-8370-2338b88b0c23
	  Boot ID:                    b2600615-03c7-4984-8138-73f9baedc04e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-8bg2x    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kindnet-4jz25               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      28s
	  kube-system                 kube-proxy-sp56w            0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22s                kube-proxy       
	  Normal  NodeHasSufficientMemory  28s (x2 over 28s)  kubelet          Node ha-290859-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s (x2 over 28s)  kubelet          Node ha-290859-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s (x2 over 28s)  kubelet          Node ha-290859-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           24s                node-controller  Node ha-290859-m03 event: Registered Node ha-290859-m03 in Controller
	  Normal  NodeReady                8s                 kubelet          Node ha-290859-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051284] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038065] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.815736] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.968563] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.543371] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr14 14:29] systemd-fstab-generator[505]: Ignoring "noauto" option for root device
	[  +0.058894] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059786] systemd-fstab-generator[518]: Ignoring "noauto" option for root device
	[  +0.183634] systemd-fstab-generator[532]: Ignoring "noauto" option for root device
	[  +0.109211] systemd-fstab-generator[544]: Ignoring "noauto" option for root device
	[  +0.261328] systemd-fstab-generator[574]: Ignoring "noauto" option for root device
	[  +4.868852] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.061817] kauditd_printk_skb: 158 callbacks suppressed
	[  +0.541337] systemd-fstab-generator[688]: Ignoring "noauto" option for root device
	[  +4.433977] systemd-fstab-generator[826]: Ignoring "noauto" option for root device
	[  +0.054755] kauditd_printk_skb: 46 callbacks suppressed
	[  +7.040196] systemd-fstab-generator[1293]: Ignoring "noauto" option for root device
	[  +0.092655] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.133260] kauditd_printk_skb: 36 callbacks suppressed
	[ +14.332004] kauditd_printk_skb: 23 callbacks suppressed
	[Apr14 14:30] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [b9d0c942045346e617420beacf1ee53ebaa73b72295bfad233845fe524f8b15c] <==
	{"level":"info","ts":"2025-04-14T14:29:20.939433Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T14:29:20.940639Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a3dbfa6decfc8853","local-member-id":"fbb007bab925a598","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:29:20.940850Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:29:20.940910Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:29:20.941291Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T14:29:20.941327Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T14:29:20.942134Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-14T14:29:20.942264Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.110:2379"}
	{"level":"info","ts":"2025-04-14T14:29:20.943625Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-14T14:29:20.943655Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2025-04-14T14:29:27.104552Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.197172ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/node-controller\" limit:1 ","response":"range_response_count:1 size:195"}
	{"level":"info","ts":"2025-04-14T14:29:27.104712Z","caller":"traceutil/trace.go:171","msg":"trace[2014118741] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/node-controller; range_end:; response_count:1; response_revision:283; }","duration":"161.489617ms","start":"2025-04-14T14:29:26.943197Z","end":"2025-04-14T14:29:27.104687Z","steps":["trace[2014118741] 'range keys from in-memory index tree'  (duration: 161.141805ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:29:27.105569Z","caller":"traceutil/trace.go:171","msg":"trace[1003808847] transaction","detail":"{read_only:false; response_revision:284; number_of_response:1; }","duration":"157.128151ms","start":"2025-04-14T14:29:26.948431Z","end":"2025-04-14T14:29:27.105559Z","steps":["trace[1003808847] 'process raft request'  (duration: 84.378612ms)","trace[1003808847] 'compare'  (duration: 71.52798ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-14T14:29:27.104865Z","caller":"traceutil/trace.go:171","msg":"trace[43329066] linearizableReadLoop","detail":"{readStateIndex:297; appliedIndex:296; }","duration":"119.436827ms","start":"2025-04-14T14:29:26.985404Z","end":"2025-04-14T14:29:27.104841Z","steps":["trace[43329066] 'read index received'  (duration: 47.335931ms)","trace[43329066] 'applied index is now lower than readState.Index'  (duration: 72.100547ms)"],"step_count":2}
	{"level":"warn","ts":"2025-04-14T14:29:27.105882Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.482108ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-290859\" limit:1 ","response":"range_response_count:1 size:4024"}
	{"level":"info","ts":"2025-04-14T14:29:27.105907Z","caller":"traceutil/trace.go:171","msg":"trace[1848025885] range","detail":"{range_begin:/registry/minions/ha-290859; range_end:; response_count:1; response_revision:284; }","duration":"120.538719ms","start":"2025-04-14T14:29:26.985360Z","end":"2025-04-14T14:29:27.105899Z","steps":["trace[1848025885] 'agreement among raft nodes before linearized reading'  (duration: 120.384333ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:30:04.979205Z","caller":"traceutil/trace.go:171","msg":"trace[85484590] transaction","detail":"{read_only:false; response_revision:496; number_of_response:1; }","duration":"156.247744ms","start":"2025-04-14T14:30:04.822935Z","end":"2025-04-14T14:30:04.979183Z","steps":["trace[85484590] 'process raft request'  (duration: 156.102613ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:39:20.967676Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":955}
	{"level":"info","ts":"2025-04-14T14:39:20.980951Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":955,"took":"12.971168ms","hash":3281203929,"current-db-size-bytes":2400256,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2400256,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2025-04-14T14:39:20.980998Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3281203929,"revision":955,"compact-revision":-1}
	{"level":"info","ts":"2025-04-14T14:42:12.425594Z","caller":"traceutil/trace.go:171","msg":"trace[593749251] linearizableReadLoop","detail":"{readStateIndex:1974; appliedIndex:1973; }","duration":"103.549581ms","start":"2025-04-14T14:42:12.322004Z","end":"2025-04-14T14:42:12.425554Z","steps":["trace[593749251] 'read index received'  (duration: 102.720139ms)","trace[593749251] 'applied index is now lower than readState.Index'  (duration: 828.805µs)"],"step_count":2}
	{"level":"warn","ts":"2025-04-14T14:42:12.426144Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.759593ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2025-04-14T14:42:12.426196Z","caller":"traceutil/trace.go:171","msg":"trace[257637869] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; response_count:0; response_revision:1805; }","duration":"104.23976ms","start":"2025-04-14T14:42:12.321948Z","end":"2025-04-14T14:42:12.426188Z","steps":["trace[257637869] 'agreement among raft nodes before linearized reading'  (duration: 103.769974ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:42:12.425685Z","caller":"traceutil/trace.go:171","msg":"trace[874985590] transaction","detail":"{read_only:false; response_revision:1805; number_of_response:1; }","duration":"128.996586ms","start":"2025-04-14T14:42:12.296675Z","end":"2025-04-14T14:42:12.425672Z","steps":["trace[874985590] 'process raft request'  (duration: 128.079961ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T14:42:29.811595Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.362023ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11932452365827166964 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:25989634b465d2f3>","response":"size:42"}
	
	
	==> kernel <==
	 14:42:57 up 14 min,  0 users,  load average: 0.16, 0.19, 0.11
	Linux ha-290859 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2df8ccb8d6ed928a95e69ecd1be2105fc737c699aa26805820a0af0eca5bb50d] <==
	I0414 14:41:34.500339       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:41:34.500416       1 main.go:301] handling current node
	I0414 14:41:44.500407       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:41:44.500557       1 main.go:301] handling current node
	I0414 14:41:54.509039       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:41:54.509064       1 main.go:301] handling current node
	I0414 14:42:04.509599       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:42:04.509640       1 main.go:301] handling current node
	I0414 14:42:14.505184       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:42:14.505543       1 main.go:301] handling current node
	I0414 14:42:24.502960       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:42:24.503004       1 main.go:301] handling current node
	I0414 14:42:34.500754       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:42:34.501033       1 main.go:301] handling current node
	I0414 14:42:34.501166       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:42:34.501231       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:42:34.501702       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.112 Flags: [] Table: 0 Realm: 0} 
	I0414 14:42:44.500437       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:42:44.500523       1 main.go:301] handling current node
	I0414 14:42:44.500540       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:42:44.500545       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:42:54.501089       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:42:54.501145       1 main.go:301] handling current node
	I0414 14:42:54.501166       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:42:54.501175       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [3607093f95b0430c4841d7be9ed19d0163ff2e9ee2889a44f89bd1ca07bf42d3] <==
	I0414 14:29:22.362271       1 autoregister_controller.go:144] Starting autoregister controller
	I0414 14:29:22.362276       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0414 14:29:22.362280       1 cache.go:39] Caches are synced for autoregister controller
	I0414 14:29:22.378719       1 controller.go:615] quota admission added evaluator for: namespaces
	I0414 14:29:22.457815       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0414 14:29:23.164003       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0414 14:29:23.168635       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0414 14:29:23.168816       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0414 14:29:23.763560       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0414 14:29:23.812117       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0414 14:29:23.884276       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0414 14:29:23.896601       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.110]
	I0414 14:29:23.897534       1 controller.go:615] quota admission added evaluator for: endpoints
	I0414 14:29:23.902387       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0414 14:29:24.193931       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0414 14:29:25.780107       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0414 14:29:25.808820       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0414 14:29:25.816856       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0414 14:29:29.653221       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0414 14:29:29.756960       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0414 14:41:55.019097       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52466: use of closed network connection
	E0414 14:41:55.440782       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52532: use of closed network connection
	E0414 14:41:55.859929       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52600: use of closed network connection
	E0414 14:41:58.277207       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52686: use of closed network connection
	E0414 14:41:58.438151       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52698: use of closed network connection
	
	
	==> kube-controller-manager [8263b35014337f6119ba3a0d6487090fd5b1b3b8a002a99623620e847d186847] <==
	I0414 14:30:03.844627       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="57.422µs"
	I0414 14:30:26.371478       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
	I0414 14:37:12.908997       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
	I0414 14:42:20.033463       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
	I0414 14:42:29.935163       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-290859-m03\" does not exist"
	I0414 14:42:29.948852       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-290859-m03" podCIDRs=["10.244.1.0/24"]
	I0414 14:42:29.949152       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:29.949831       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:29.958386       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="234.248µs"
	I0414 14:42:29.963750       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:29.969981       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="39.002µs"
	I0414 14:42:30.275380       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:30.614411       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:33.964410       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-290859-m03"
	I0414 14:42:34.046665       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:39.961881       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:49.191468       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-290859-m03"
	I0414 14:42:49.192361       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:49.201252       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:49.216690       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="71.679µs"
	I0414 14:42:49.217122       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="45.948µs"
	I0414 14:42:49.230018       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="69.053µs"
	I0414 14:42:52.664944       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="13.387962ms"
	I0414 14:42:52.665652       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="82.546µs"
	I0414 14:42:53.979890       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	
	
	==> kube-proxy [e22a81661302ff340c9846a7a06a13d955ab98cfe8e7088e0c805fb4f3eee8a2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0414 14:29:30.555771       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0414 14:29:30.580550       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.110"]
	E0414 14:29:30.580640       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0414 14:29:30.617235       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0414 14:29:30.617293       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0414 14:29:30.617328       1 server_linux.go:170] "Using iptables Proxier"
	I0414 14:29:30.620046       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0414 14:29:30.620989       1 server.go:497] "Version info" version="v1.32.2"
	I0414 14:29:30.621018       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 14:29:30.625365       1 config.go:329] "Starting node config controller"
	I0414 14:29:30.625863       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0414 14:29:30.628597       1 config.go:199] "Starting service config controller"
	I0414 14:29:30.628644       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0414 14:29:30.628665       1 config.go:105] "Starting endpoint slice config controller"
	I0414 14:29:30.628683       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0414 14:29:30.726314       1 shared_informer.go:320] Caches are synced for node config
	I0414 14:29:30.729639       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0414 14:29:30.729680       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [341626ffff967b14e3bfaa050905eba2b82a07223c0356ee50b5deeef6d9898b] <==
	E0414 14:29:22.288686       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:22.287191       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:22.288704       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:22.286394       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0414 14:29:22.288719       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0414 14:29:22.285771       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.108289       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0414 14:29:23.108351       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.153824       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:23.153954       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.203744       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0414 14:29:23.203977       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0414 14:29:23.367236       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0414 14:29:23.367550       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.396026       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0414 14:29:23.396243       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.401643       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:23.401820       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.425454       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0414 14:29:23.425684       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.433181       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:23.433222       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.457688       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0414 14:29:23.457949       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0414 14:29:25.662221       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 14 14:38:25 ha-290859 kubelet[1300]: E0414 14:38:25.691874    1300 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:38:25 ha-290859 kubelet[1300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:38:25 ha-290859 kubelet[1300]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:38:25 ha-290859 kubelet[1300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:38:25 ha-290859 kubelet[1300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 14:39:25 ha-290859 kubelet[1300]: E0414 14:39:25.692811    1300 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:39:25 ha-290859 kubelet[1300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:39:25 ha-290859 kubelet[1300]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:39:25 ha-290859 kubelet[1300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:39:25 ha-290859 kubelet[1300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 14:40:25 ha-290859 kubelet[1300]: E0414 14:40:25.693003    1300 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:40:25 ha-290859 kubelet[1300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:40:25 ha-290859 kubelet[1300]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:40:25 ha-290859 kubelet[1300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:40:25 ha-290859 kubelet[1300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 14:41:25 ha-290859 kubelet[1300]: E0414 14:41:25.692589    1300 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:41:25 ha-290859 kubelet[1300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:41:25 ha-290859 kubelet[1300]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:41:25 ha-290859 kubelet[1300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:41:25 ha-290859 kubelet[1300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 14:42:25 ha-290859 kubelet[1300]: E0414 14:42:25.692394    1300 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:42:25 ha-290859 kubelet[1300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:42:25 ha-290859 kubelet[1300]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:42:25 ha-290859 kubelet[1300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:42:25 ha-290859 kubelet[1300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
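A note on the kube-proxy section of the dump above: nftables cleanup fails with "Operation not supported" before the proxier falls back to iptables, which suggests the minikube guest kernel (5.10.207, Buildroot) lacks the nf_tables support that the cleanup path probes for. A minimal sketch of the same kind of probe, assuming the nft userspace tool is installed on the host being checked; this is illustrative only and not part of the test suite:

package main

import (
	"fmt"
	"os/exec"
)

// Try a trivial nftables command and report whether the kernel accepts
// it -- the condition kube-proxy's cleanup tripped over in the log above.
func main() {
	out, err := exec.Command("nft", "list", "tables").CombinedOutput()
	if err != nil {
		fmt.Printf("nftables unavailable: %v\n%s", err, out)
		return
	}
	fmt.Printf("nftables available:\n%s", out)
}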
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-290859 -n ha-290859
helpers_test.go:261: (dbg) Run:  kubectl --context ha-290859 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-58667487b6-q9jvx
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/CopyFile]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-290859 describe pod busybox-58667487b6-q9jvx
helpers_test.go:282: (dbg) kubectl --context ha-290859 describe pod busybox-58667487b6-q9jvx:

                                                
                                                
-- stdout --
	Name:             busybox-58667487b6-q9jvx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=58667487b6
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-58667487b6
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fklg7 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-fklg7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  2m32s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  20s (x2 over 29s)    default-scheduler  0/2 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  9s                   default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
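The FailedScheduling events above point to pod anti-affinity: each busybox replica refuses to share a node with another app=busybox pod, so the third replica stays Pending until a third schedulable node is Ready. A minimal sketch of that kind of constraint using the k8s.io/api types; the label selector and topology key here are assumptions for illustration, not taken from the test's actual deployment:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// antiAffinityPerNode builds a required pod anti-affinity rule that
// forbids two pods labeled app=busybox from landing on the same node.
// With too few Ready nodes, the scheduler then emits exactly the kind of
// "didn't match pod anti-affinity rules" event seen above.
func antiAffinityPerNode() *corev1.Affinity {
	return &corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"app": "busybox"},
				},
				TopologyKey: "kubernetes.io/hostname",
			}},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", antiAffinityPerNode())
}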
helpers_test.go:285: <<< TestMultiControlPlane/serial/CopyFile FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/CopyFile (2.49s)
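The post-mortem above finds the stuck pod by filtering on phase (--field-selector=status.phase!=Running). The same query can be issued programmatically through client-go; a minimal sketch, assuming a reachable kubeconfig such as the test's ha-290859 context:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// listNonRunning mirrors the post-mortem's
// `kubectl get po -A --field-selector=status.phase!=Running`.
func listNonRunning(clientset *kubernetes.Clientset) error {
	pods, err := clientset.CoreV1().Pods(metav1.NamespaceAll).List(
		context.Background(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"},
	)
	if err != nil {
		return err
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
	return nil
}

func main() {
	// Assumes the default kubeconfig location; swap in the context used
	// by the test as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := listNonRunning(cs); err != nil {
		panic(err)
	}
}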

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (3.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-290859 node stop m02 -v=7 --alsologtostderr
E0414 14:42:59.575476 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/functional-905978/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-290859 node stop m02 -v=7 --alsologtostderr: (1.287125531s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-290859 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-290859 status -v=7 --alsologtostderr: exit status 7 (438.050472ms)

                                                
                                                
-- stdout --
	ha-290859
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-290859-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-290859-m03
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0414 14:42:59.949793 1217995 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:42:59.950088 1217995 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:42:59.950100 1217995 out.go:358] Setting ErrFile to fd 2...
	I0414 14:42:59.950104 1217995 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:42:59.950339 1217995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
	I0414 14:42:59.950516 1217995 out.go:352] Setting JSON to false
	I0414 14:42:59.950552 1217995 mustload.go:65] Loading cluster: ha-290859
	I0414 14:42:59.950604 1217995 notify.go:220] Checking for updates...
	I0414 14:42:59.951118 1217995 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:42:59.951149 1217995 status.go:174] checking status of ha-290859 ...
	I0414 14:42:59.951780 1217995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:42:59.951851 1217995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:42:59.973876 1217995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35651
	I0414 14:42:59.974437 1217995 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:42:59.975005 1217995 main.go:141] libmachine: Using API Version  1
	I0414 14:42:59.975028 1217995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:42:59.975480 1217995 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:42:59.975698 1217995 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:42:59.977489 1217995 status.go:371] ha-290859 host status = "Running" (err=<nil>)
	I0414 14:42:59.977509 1217995 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:42:59.977986 1217995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:42:59.978045 1217995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:42:59.993685 1217995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35159
	I0414 14:42:59.994155 1217995 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:42:59.994615 1217995 main.go:141] libmachine: Using API Version  1
	I0414 14:42:59.994637 1217995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:42:59.995025 1217995 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:42:59.995193 1217995 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:42:59.998075 1217995 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:42:59.998483 1217995 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:42:59.998526 1217995 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:42:59.998631 1217995 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:42:59.998920 1217995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:42:59.998958 1217995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:43:00.015385 1217995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37923
	I0414 14:43:00.015823 1217995 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:43:00.016306 1217995 main.go:141] libmachine: Using API Version  1
	I0414 14:43:00.016336 1217995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:43:00.016701 1217995 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:43:00.016919 1217995 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:43:00.017109 1217995 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 14:43:00.017147 1217995 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:43:00.020222 1217995 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:43:00.020730 1217995 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:43:00.020768 1217995 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:43:00.020940 1217995 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:43:00.021110 1217995 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:43:00.021257 1217995 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:43:00.021429 1217995 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:43:00.102610 1217995 ssh_runner.go:195] Run: systemctl --version
	I0414 14:43:00.108167 1217995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:43:00.127778 1217995 kubeconfig.go:125] found "ha-290859" server: "https://192.168.39.254:8443"
	I0414 14:43:00.127833 1217995 api_server.go:166] Checking apiserver status ...
	I0414 14:43:00.127870 1217995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:43:00.140744 1217995 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1191/cgroup
	W0414 14:43:00.150035 1217995 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1191/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0414 14:43:00.150091 1217995 ssh_runner.go:195] Run: ls
	I0414 14:43:00.154296 1217995 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0414 14:43:00.159425 1217995 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0414 14:43:00.159457 1217995 status.go:463] ha-290859 apiserver status = Running (err=<nil>)
	I0414 14:43:00.159470 1217995 status.go:176] ha-290859 status: &{Name:ha-290859 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 14:43:00.159485 1217995 status.go:174] checking status of ha-290859-m02 ...
	I0414 14:43:00.159910 1217995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:43:00.159960 1217995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:43:00.176487 1217995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37223
	I0414 14:43:00.176996 1217995 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:43:00.177451 1217995 main.go:141] libmachine: Using API Version  1
	I0414 14:43:00.177476 1217995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:43:00.177824 1217995 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:43:00.178135 1217995 main.go:141] libmachine: (ha-290859-m02) Calling .GetState
	I0414 14:43:00.179784 1217995 status.go:371] ha-290859-m02 host status = "Stopped" (err=<nil>)
	I0414 14:43:00.179798 1217995 status.go:384] host is not running, skipping remaining checks
	I0414 14:43:00.179805 1217995 status.go:176] ha-290859-m02 status: &{Name:ha-290859-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 14:43:00.179825 1217995 status.go:174] checking status of ha-290859-m03 ...
	I0414 14:43:00.180134 1217995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:43:00.180173 1217995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:43:00.196224 1217995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39891
	I0414 14:43:00.196708 1217995 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:43:00.197173 1217995 main.go:141] libmachine: Using API Version  1
	I0414 14:43:00.197198 1217995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:43:00.197559 1217995 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:43:00.197823 1217995 main.go:141] libmachine: (ha-290859-m03) Calling .GetState
	I0414 14:43:00.199370 1217995 status.go:371] ha-290859-m03 host status = "Running" (err=<nil>)
	I0414 14:43:00.199391 1217995 host.go:66] Checking if "ha-290859-m03" exists ...
	I0414 14:43:00.199664 1217995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:43:00.199702 1217995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:43:00.216255 1217995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33641
	I0414 14:43:00.216799 1217995 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:43:00.217379 1217995 main.go:141] libmachine: Using API Version  1
	I0414 14:43:00.217403 1217995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:43:00.217741 1217995 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:43:00.217904 1217995 main.go:141] libmachine: (ha-290859-m03) Calling .GetIP
	I0414 14:43:00.220851 1217995 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:43:00.221288 1217995 main.go:141] libmachine: (ha-290859-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:4a:72", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:42:14 +0000 UTC Type:0 Mac:52:54:00:b7:4a:72 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-290859-m03 Clientid:01:52:54:00:b7:4a:72}
	I0414 14:43:00.221318 1217995 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined IP address 192.168.39.112 and MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:43:00.221441 1217995 host.go:66] Checking if "ha-290859-m03" exists ...
	I0414 14:43:00.221733 1217995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:43:00.221787 1217995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:43:00.238080 1217995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34205
	I0414 14:43:00.238661 1217995 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:43:00.239136 1217995 main.go:141] libmachine: Using API Version  1
	I0414 14:43:00.239158 1217995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:43:00.239554 1217995 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:43:00.239770 1217995 main.go:141] libmachine: (ha-290859-m03) Calling .DriverName
	I0414 14:43:00.239959 1217995 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 14:43:00.239987 1217995 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHHostname
	I0414 14:43:00.243027 1217995 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:43:00.243622 1217995 main.go:141] libmachine: (ha-290859-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:4a:72", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:42:14 +0000 UTC Type:0 Mac:52:54:00:b7:4a:72 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-290859-m03 Clientid:01:52:54:00:b7:4a:72}
	I0414 14:43:00.243662 1217995 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined IP address 192.168.39.112 and MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:43:00.243834 1217995 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHPort
	I0414 14:43:00.244031 1217995 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHKeyPath
	I0414 14:43:00.244187 1217995 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHUsername
	I0414 14:43:00.244351 1217995 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m03/id_rsa Username:docker}
	I0414 14:43:00.322364 1217995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:43:00.336816 1217995 status.go:176] ha-290859-m03 status: &{Name:ha-290859-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
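Note on the status probe above: the check at api_server.go:279 is an HTTPS GET against the apiserver's /healthz endpoint, and the node is considered healthy only when the response is HTTP 200 with the literal body "ok" (the "returned 200: / ok" pair in the log). A minimal sketch of that kind of probe, assuming a client that is allowed to skip TLS verification for brevity (the real probe trusts the cluster CA; checkHealthz and its parameters are illustrative, not minikube's actual API):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz GETs https://<host>:<port>/healthz and reports healthy only on
// HTTP 200 with body "ok", mirroring the status check in the log above.
func checkHealthz(host string, port int) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Sketch only: skip verification; the real check uses the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(fmt.Sprintf("https://%s:%d/healthz", host, port))
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	healthy, err := checkHealthz("192.168.39.254", 8443)
	fmt.Println(healthy, err)
}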
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-290859 status -v=7 --alsologtostderr": ha-290859
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-290859-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-290859-m03
type: Worker
host: Running
kubelet: Running

ha_test.go:380: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-290859 status -v=7 --alsologtostderr": ha-290859
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-290859-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-290859-m03
type: Worker
host: Running
kubelet: Running

ha_test.go:383: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-290859 status -v=7 --alsologtostderr": ha-290859
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-290859-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-290859-m03
type: Worker
host: Running
kubelet: Running

ha_test.go:386: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-290859 status -v=7 --alsologtostderr": ha-290859
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-290859-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-290859-m03
type: Worker
host: Running
kubelet: Running
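The four failures above (ha_test.go:377, 380, 383, 386) all derive from this same status text: with m02 stopped, the output no longer contains three "host: Running" / "kubelet: Running" entries or two running apiservers. A sketch of that style of assertion, assuming the test simply counts substring occurrences in the combined status output (expectStatus is an illustrative helper, not the test's actual code):

package main

import (
	"fmt"
	"strings"
)

// expectStatus counts substring occurrences in `minikube status` output and
// reports every count that does not match the expectation.
func expectStatus(out string, want map[string]int) []string {
	var problems []string
	for needle, n := range want {
		if got := strings.Count(out, needle); got != n {
			problems = append(problems, fmt.Sprintf("want %d %q, got %d", n, needle, got))
		}
	}
	return problems
}

func main() {
	out := "host: Running\nkubelet: Running\nhost: Stopped\nkubelet: Stopped\nhost: Running\nkubelet: Running\n"
	for _, p := range expectStatus(out, map[string]int{"host: Running": 3, "kubelet: Running": 3}) {
		fmt.Println(p)
	}
}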

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-290859 -n ha-290859
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-290859 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-290859 logs -n 25: (1.130240297s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x -- nslookup |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx -- nslookup |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg -- nslookup |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x             |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx             |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg             |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg -- sh       |           |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1            |           |         |         |                     |                     |
	| node    | add -p ha-290859 -v=7                | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:42 UTC | 14 Apr 25 14:42 UTC |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-290859 node stop m02 -v=7         | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:42 UTC | 14 Apr 25 14:42 UTC |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 14:28:44
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 14:28:44.853283 1213155 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:28:44.853383 1213155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:28:44.853391 1213155 out.go:358] Setting ErrFile to fd 2...
	I0414 14:28:44.853395 1213155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:28:44.853589 1213155 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
	I0414 14:28:44.854173 1213155 out.go:352] Setting JSON to false
	I0414 14:28:44.855127 1213155 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":22268,"bootTime":1744618657,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 14:28:44.855241 1213155 start.go:139] virtualization: kvm guest
	I0414 14:28:44.857434 1213155 out.go:177] * [ha-290859] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 14:28:44.858763 1213155 out.go:177]   - MINIKUBE_LOCATION=20512
	I0414 14:28:44.858802 1213155 notify.go:220] Checking for updates...
	I0414 14:28:44.861113 1213155 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 14:28:44.862568 1213155 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:28:44.864291 1213155 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:28:44.865558 1213155 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 14:28:44.866690 1213155 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 14:28:44.867994 1213155 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 14:28:44.903880 1213155 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 14:28:44.904972 1213155 start.go:297] selected driver: kvm2
	I0414 14:28:44.904990 1213155 start.go:901] validating driver "kvm2" against <nil>
	I0414 14:28:44.905002 1213155 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 14:28:44.905693 1213155 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:28:44.905760 1213155 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20512-1196368/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 14:28:44.921165 1213155 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 14:28:44.921211 1213155 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 14:28:44.921449 1213155 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 14:28:44.921483 1213155 cni.go:84] Creating CNI manager for ""
	I0414 14:28:44.921521 1213155 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0414 14:28:44.921528 1213155 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0414 14:28:44.921581 1213155 start.go:340] cluster config:
	{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:28:44.921681 1213155 iso.go:125] acquiring lock: {Name:mkbf783c803effe6c4b8297ac6b84dcca9e29413 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:28:44.923479 1213155 out.go:177] * Starting "ha-290859" primary control-plane node in "ha-290859" cluster
	I0414 14:28:44.924489 1213155 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:28:44.924534 1213155 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
	I0414 14:28:44.924545 1213155 cache.go:56] Caching tarball of preloaded images
	I0414 14:28:44.924630 1213155 preload.go:172] Found /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0414 14:28:44.924642 1213155 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0414 14:28:44.925004 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:28:44.925036 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json: {Name:mk9cf46898e9311ef305249e5d7a46d116958366 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:28:44.925215 1213155 start.go:360] acquireMachinesLock for ha-290859: {Name:mk496006d22a0565bb9e0d565e1b3cb0cf0971cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 14:28:44.925249 1213155 start.go:364] duration metric: took 19.936µs to acquireMachinesLock for "ha-290859"
	I0414 14:28:44.925270 1213155 start.go:93] Provisioning new machine with config: &{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0414 14:28:44.925333 1213155 start.go:125] createHost starting for "" (driver="kvm2")
	I0414 14:28:44.926873 1213155 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0414 14:28:44.927025 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:28:44.927081 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:28:44.941913 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35769
	I0414 14:28:44.942352 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:28:44.942833 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:28:44.942851 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:28:44.943193 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:28:44.943375 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:28:44.943526 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:28:44.943664 1213155 start.go:159] libmachine.API.Create for "ha-290859" (driver="kvm2")
	I0414 14:28:44.943687 1213155 client.go:168] LocalClient.Create starting
	I0414 14:28:44.943713 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem
	I0414 14:28:44.943749 1213155 main.go:141] libmachine: Decoding PEM data...
	I0414 14:28:44.943766 1213155 main.go:141] libmachine: Parsing certificate...
	I0414 14:28:44.943825 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem
	I0414 14:28:44.943844 1213155 main.go:141] libmachine: Decoding PEM data...
	I0414 14:28:44.943857 1213155 main.go:141] libmachine: Parsing certificate...
	I0414 14:28:44.943880 1213155 main.go:141] libmachine: Running pre-create checks...
	I0414 14:28:44.943888 1213155 main.go:141] libmachine: (ha-290859) Calling .PreCreateCheck
	I0414 14:28:44.944202 1213155 main.go:141] libmachine: (ha-290859) Calling .GetConfigRaw
	I0414 14:28:44.944583 1213155 main.go:141] libmachine: Creating machine...
	I0414 14:28:44.944596 1213155 main.go:141] libmachine: (ha-290859) Calling .Create
	I0414 14:28:44.944741 1213155 main.go:141] libmachine: (ha-290859) creating KVM machine...
	I0414 14:28:44.944764 1213155 main.go:141] libmachine: (ha-290859) creating network...
	I0414 14:28:44.945897 1213155 main.go:141] libmachine: (ha-290859) DBG | found existing default KVM network
	I0414 14:28:44.946500 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:44.946375 1213178 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001236b0}
	I0414 14:28:44.946525 1213155 main.go:141] libmachine: (ha-290859) DBG | created network xml: 
	I0414 14:28:44.946536 1213155 main.go:141] libmachine: (ha-290859) DBG | <network>
	I0414 14:28:44.946547 1213155 main.go:141] libmachine: (ha-290859) DBG |   <name>mk-ha-290859</name>
	I0414 14:28:44.946556 1213155 main.go:141] libmachine: (ha-290859) DBG |   <dns enable='no'/>
	I0414 14:28:44.946567 1213155 main.go:141] libmachine: (ha-290859) DBG |   
	I0414 14:28:44.946578 1213155 main.go:141] libmachine: (ha-290859) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0414 14:28:44.946589 1213155 main.go:141] libmachine: (ha-290859) DBG |     <dhcp>
	I0414 14:28:44.946597 1213155 main.go:141] libmachine: (ha-290859) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0414 14:28:44.946611 1213155 main.go:141] libmachine: (ha-290859) DBG |     </dhcp>
	I0414 14:28:44.946635 1213155 main.go:141] libmachine: (ha-290859) DBG |   </ip>
	I0414 14:28:44.946659 1213155 main.go:141] libmachine: (ha-290859) DBG |   
	I0414 14:28:44.946681 1213155 main.go:141] libmachine: (ha-290859) DBG | </network>
	I0414 14:28:44.946692 1213155 main.go:141] libmachine: (ha-290859) DBG | 
	I0414 14:28:44.951588 1213155 main.go:141] libmachine: (ha-290859) DBG | trying to create private KVM network mk-ha-290859 192.168.39.0/24...
	I0414 14:28:45.019463 1213155 main.go:141] libmachine: (ha-290859) DBG | private KVM network mk-ha-290859 192.168.39.0/24 created
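The driver renders the network XML printed above and registers it with libvirt before any VM exists, equivalent to virsh net-define followed by net-start. A sketch of that define-and-start sequence using the libvirt Go bindings (the libvirt.org/go/libvirt import path and shortened error handling are assumptions; this is not the kvm2 driver's actual code path):

package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

const networkXML = `<network>
  <name>mk-ha-290859</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Define the persistent network object, then start it.
	net, err := conn.NetworkDefineXML(networkXML)
	if err != nil {
		log.Fatal(err)
	}
	defer net.Free()
	if err := net.Create(); err != nil {
		log.Fatal(err)
	}
	log.Println("private network mk-ha-290859 is active")
}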
	I0414 14:28:45.019524 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.019424 1213178 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:28:45.019537 1213155 main.go:141] libmachine: (ha-290859) setting up store path in /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859 ...
	I0414 14:28:45.019577 1213155 main.go:141] libmachine: (ha-290859) building disk image from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 14:28:45.019612 1213155 main.go:141] libmachine: (ha-290859) Downloading /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 14:28:45.329551 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.329430 1213178 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa...
	I0414 14:28:45.651739 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.651571 1213178 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/ha-290859.rawdisk...
	I0414 14:28:45.651774 1213155 main.go:141] libmachine: (ha-290859) DBG | Writing magic tar header
	I0414 14:28:45.651813 1213155 main.go:141] libmachine: (ha-290859) DBG | Writing SSH key tar header
	I0414 14:28:45.651828 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.651709 1213178 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859 ...
	I0414 14:28:45.651838 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859
	I0414 14:28:45.651849 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines
	I0414 14:28:45.651870 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:28:45.651877 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368
	I0414 14:28:45.651888 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859 (perms=drwx------)
	I0414 14:28:45.651901 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines (perms=drwxr-xr-x)
	I0414 14:28:45.651912 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube (perms=drwxr-xr-x)
	I0414 14:28:45.651969 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 14:28:45.651997 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins
	I0414 14:28:45.652007 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368 (perms=drwxrwxr-x)
	I0414 14:28:45.652022 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 14:28:45.652031 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 14:28:45.652040 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home
	I0414 14:28:45.652050 1213155 main.go:141] libmachine: (ha-290859) DBG | skipping /home - not owner
	I0414 14:28:45.652117 1213155 main.go:141] libmachine: (ha-290859) creating domain...
	I0414 14:28:45.653155 1213155 main.go:141] libmachine: (ha-290859) define libvirt domain using xml: 
	I0414 14:28:45.653173 1213155 main.go:141] libmachine: (ha-290859) <domain type='kvm'>
	I0414 14:28:45.653182 1213155 main.go:141] libmachine: (ha-290859)   <name>ha-290859</name>
	I0414 14:28:45.653197 1213155 main.go:141] libmachine: (ha-290859)   <memory unit='MiB'>2200</memory>
	I0414 14:28:45.653206 1213155 main.go:141] libmachine: (ha-290859)   <vcpu>2</vcpu>
	I0414 14:28:45.653212 1213155 main.go:141] libmachine: (ha-290859)   <features>
	I0414 14:28:45.653231 1213155 main.go:141] libmachine: (ha-290859)     <acpi/>
	I0414 14:28:45.653240 1213155 main.go:141] libmachine: (ha-290859)     <apic/>
	I0414 14:28:45.653258 1213155 main.go:141] libmachine: (ha-290859)     <pae/>
	I0414 14:28:45.653267 1213155 main.go:141] libmachine: (ha-290859)     
	I0414 14:28:45.653272 1213155 main.go:141] libmachine: (ha-290859)   </features>
	I0414 14:28:45.653277 1213155 main.go:141] libmachine: (ha-290859)   <cpu mode='host-passthrough'>
	I0414 14:28:45.653281 1213155 main.go:141] libmachine: (ha-290859)   
	I0414 14:28:45.653287 1213155 main.go:141] libmachine: (ha-290859)   </cpu>
	I0414 14:28:45.653317 1213155 main.go:141] libmachine: (ha-290859)   <os>
	I0414 14:28:45.653340 1213155 main.go:141] libmachine: (ha-290859)     <type>hvm</type>
	I0414 14:28:45.653351 1213155 main.go:141] libmachine: (ha-290859)     <boot dev='cdrom'/>
	I0414 14:28:45.653362 1213155 main.go:141] libmachine: (ha-290859)     <boot dev='hd'/>
	I0414 14:28:45.653372 1213155 main.go:141] libmachine: (ha-290859)     <bootmenu enable='no'/>
	I0414 14:28:45.653379 1213155 main.go:141] libmachine: (ha-290859)   </os>
	I0414 14:28:45.653387 1213155 main.go:141] libmachine: (ha-290859)   <devices>
	I0414 14:28:45.653396 1213155 main.go:141] libmachine: (ha-290859)     <disk type='file' device='cdrom'>
	I0414 14:28:45.653409 1213155 main.go:141] libmachine: (ha-290859)       <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/boot2docker.iso'/>
	I0414 14:28:45.653425 1213155 main.go:141] libmachine: (ha-290859)       <target dev='hdc' bus='scsi'/>
	I0414 14:28:45.653434 1213155 main.go:141] libmachine: (ha-290859)       <readonly/>
	I0414 14:28:45.653441 1213155 main.go:141] libmachine: (ha-290859)     </disk>
	I0414 14:28:45.653450 1213155 main.go:141] libmachine: (ha-290859)     <disk type='file' device='disk'>
	I0414 14:28:45.653459 1213155 main.go:141] libmachine: (ha-290859)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 14:28:45.653472 1213155 main.go:141] libmachine: (ha-290859)       <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/ha-290859.rawdisk'/>
	I0414 14:28:45.653484 1213155 main.go:141] libmachine: (ha-290859)       <target dev='hda' bus='virtio'/>
	I0414 14:28:45.653515 1213155 main.go:141] libmachine: (ha-290859)     </disk>
	I0414 14:28:45.653535 1213155 main.go:141] libmachine: (ha-290859)     <interface type='network'>
	I0414 14:28:45.653542 1213155 main.go:141] libmachine: (ha-290859)       <source network='mk-ha-290859'/>
	I0414 14:28:45.653551 1213155 main.go:141] libmachine: (ha-290859)       <model type='virtio'/>
	I0414 14:28:45.653571 1213155 main.go:141] libmachine: (ha-290859)     </interface>
	I0414 14:28:45.653583 1213155 main.go:141] libmachine: (ha-290859)     <interface type='network'>
	I0414 14:28:45.653600 1213155 main.go:141] libmachine: (ha-290859)       <source network='default'/>
	I0414 14:28:45.653612 1213155 main.go:141] libmachine: (ha-290859)       <model type='virtio'/>
	I0414 14:28:45.653620 1213155 main.go:141] libmachine: (ha-290859)     </interface>
	I0414 14:28:45.653629 1213155 main.go:141] libmachine: (ha-290859)     <serial type='pty'>
	I0414 14:28:45.653637 1213155 main.go:141] libmachine: (ha-290859)       <target port='0'/>
	I0414 14:28:45.653643 1213155 main.go:141] libmachine: (ha-290859)     </serial>
	I0414 14:28:45.653650 1213155 main.go:141] libmachine: (ha-290859)     <console type='pty'>
	I0414 14:28:45.653666 1213155 main.go:141] libmachine: (ha-290859)       <target type='serial' port='0'/>
	I0414 14:28:45.653677 1213155 main.go:141] libmachine: (ha-290859)     </console>
	I0414 14:28:45.653688 1213155 main.go:141] libmachine: (ha-290859)     <rng model='virtio'>
	I0414 14:28:45.653706 1213155 main.go:141] libmachine: (ha-290859)       <backend model='random'>/dev/random</backend>
	I0414 14:28:45.653722 1213155 main.go:141] libmachine: (ha-290859)     </rng>
	I0414 14:28:45.653733 1213155 main.go:141] libmachine: (ha-290859)     
	I0414 14:28:45.653742 1213155 main.go:141] libmachine: (ha-290859)     
	I0414 14:28:45.653750 1213155 main.go:141] libmachine: (ha-290859)   </devices>
	I0414 14:28:45.653759 1213155 main.go:141] libmachine: (ha-290859) </domain>
	I0414 14:28:45.653770 1213155 main.go:141] libmachine: (ha-290859) 
	I0414 14:28:45.658722 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:59:bb:2c in network default
	I0414 14:28:45.659333 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:45.659353 1213155 main.go:141] libmachine: (ha-290859) starting domain...
	I0414 14:28:45.659378 1213155 main.go:141] libmachine: (ha-290859) ensuring networks are active...
	I0414 14:28:45.660118 1213155 main.go:141] libmachine: (ha-290859) Ensuring network default is active
	I0414 14:28:45.660455 1213155 main.go:141] libmachine: (ha-290859) Ensuring network mk-ha-290859 is active
	I0414 14:28:45.660871 1213155 main.go:141] libmachine: (ha-290859) getting domain XML...
	I0414 14:28:45.661572 1213155 main.go:141] libmachine: (ha-290859) creating domain...
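Defining and booting the VM follows the same libvirt pattern as the network above: define the persistent domain from XML, then create (start) it. A compressed sketch under the same binding assumptions as the previous example, loading the domain XML (the one printed in this log) from a local file for brevity:

package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// The domain XML is the one the driver prints above; here it comes from a file.
	xml, err := os.ReadFile("ha-290859.xml")
	if err != nil {
		log.Fatal(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(xml)) // persistent definition (virsh define)
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()
	if err := dom.Create(); err != nil { // boots the VM (virsh start)
		log.Fatal(err)
	}
	log.Println("domain started; a DHCP lease will appear once the guest boots")
}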
	I0414 14:28:46.865636 1213155 main.go:141] libmachine: (ha-290859) waiting for IP...
	I0414 14:28:46.866384 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:46.866766 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:46.866798 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:46.866746 1213178 retry.go:31] will retry after 192.973653ms: waiting for domain to come up
	I0414 14:28:47.061336 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:47.061771 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:47.061833 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:47.061746 1213178 retry.go:31] will retry after 359.567223ms: waiting for domain to come up
	I0414 14:28:47.423487 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:47.423982 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:47.424016 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:47.423949 1213178 retry.go:31] will retry after 421.939914ms: waiting for domain to come up
	I0414 14:28:47.847747 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:47.848233 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:47.848285 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:47.848207 1213178 retry.go:31] will retry after 530.391474ms: waiting for domain to come up
	I0414 14:28:48.380081 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:48.380580 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:48.380623 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:48.380551 1213178 retry.go:31] will retry after 642.117854ms: waiting for domain to come up
	I0414 14:28:49.024104 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:49.024507 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:49.024543 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:49.024472 1213178 retry.go:31] will retry after 676.607867ms: waiting for domain to come up
	I0414 14:28:49.702625 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:49.702971 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:49.702999 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:49.702940 1213178 retry.go:31] will retry after 827.403569ms: waiting for domain to come up
	I0414 14:28:50.531673 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:50.532146 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:50.532168 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:50.532111 1213178 retry.go:31] will retry after 1.096062201s: waiting for domain to come up
	I0414 14:28:51.630700 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:51.631223 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:51.631271 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:51.631180 1213178 retry.go:31] will retry after 1.695737217s: waiting for domain to come up
	I0414 14:28:53.328391 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:53.328936 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:53.328976 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:53.328895 1213178 retry.go:31] will retry after 1.847433296s: waiting for domain to come up
	I0414 14:28:55.178635 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:55.179196 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:55.179222 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:55.179116 1213178 retry.go:31] will retry after 1.882043118s: waiting for domain to come up
	I0414 14:28:57.063275 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:57.063819 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:57.063839 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:57.063785 1213178 retry.go:31] will retry after 2.565601812s: waiting for domain to come up
	I0414 14:28:59.632546 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:59.633076 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:59.633121 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:59.633056 1213178 retry.go:31] will retry after 3.119155423s: waiting for domain to come up
	I0414 14:29:02.755950 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:02.756520 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:29:02.756617 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:29:02.756481 1213178 retry.go:31] will retry after 3.570724653s: waiting for domain to come up
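The "waiting for IP" phase above polls for the guest's DHCP lease with a growing, jittered delay (the retry.go:31 lines, from ~193ms up to ~3.6s). A standalone sketch of that retry pattern, with a hypothetical lookupIP standing in for the driver's lease query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupIP is a stand-in for querying libvirt's DHCP leases by MAC address.
func lookupIP(mac string) (string, error) {
	return "", errNoLease // pretend the guest has not booted yet
}

// waitForIP retries lookupIP with a growing, jittered delay until it succeeds
// or the deadline passes, like the retry intervals in the log above.
func waitForIP(mac string, deadline time.Duration) (string, error) {
	start := time.Now()
	delay := 200 * time.Millisecond
	for time.Since(start) < deadline {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", fmt.Errorf("timed out after %v waiting for IP", deadline)
}

func main() {
	if _, err := waitForIP("52:54:00:be:9f:8b", 2*time.Second); err != nil {
		fmt.Println(err)
	}
}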
	I0414 14:29:06.329744 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.330242 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has current primary IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.330260 1213155 main.go:141] libmachine: (ha-290859) found domain IP: 192.168.39.110
	I0414 14:29:06.330269 1213155 main.go:141] libmachine: (ha-290859) reserving static IP address...
	I0414 14:29:06.330641 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find host DHCP lease matching {name: "ha-290859", mac: "52:54:00:be:9f:8b", ip: "192.168.39.110"} in network mk-ha-290859
	I0414 14:29:06.406487 1213155 main.go:141] libmachine: (ha-290859) DBG | Getting to WaitForSSH function...
	I0414 14:29:06.406521 1213155 main.go:141] libmachine: (ha-290859) reserved static IP address 192.168.39.110 for domain ha-290859
	I0414 14:29:06.406533 1213155 main.go:141] libmachine: (ha-290859) waiting for SSH...
	I0414 14:29:06.409873 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.410210 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:minikube Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.410253 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.410314 1213155 main.go:141] libmachine: (ha-290859) DBG | Using SSH client type: external
	I0414 14:29:06.410387 1213155 main.go:141] libmachine: (ha-290859) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa (-rw-------)
	I0414 14:29:06.410418 1213155 main.go:141] libmachine: (ha-290859) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.110 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:29:06.410439 1213155 main.go:141] libmachine: (ha-290859) DBG | About to run SSH command:
	I0414 14:29:06.410452 1213155 main.go:141] libmachine: (ha-290859) DBG | exit 0
	I0414 14:29:06.535060 1213155 main.go:141] libmachine: (ha-290859) DBG | SSH cmd err, output: <nil>: 
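Before provisioning, the driver proves SSH is reachable by running `exit 0` through the system ssh binary with host-key checking disabled, per the argument list logged above (abbreviated here). A sketch of issuing that probe from Go with os/exec; the address and key path are taken from the log, and sshExitZero is an illustrative helper name:

package main

import (
	"fmt"
	"os/exec"
)

// sshExitZero runs `exit 0` on the guest via the external ssh client, using
// the same style of non-interactive options the driver logs above.
func sshExitZero(ip, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit 0",
	}
	return exec.Command("/usr/bin/ssh", args...).Run()
}

func main() {
	err := sshExitZero("192.168.39.110",
		"/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa")
	fmt.Println("ssh reachable:", err == nil)
}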
	I0414 14:29:06.535328 1213155 main.go:141] libmachine: (ha-290859) KVM machine creation complete
	I0414 14:29:06.535695 1213155 main.go:141] libmachine: (ha-290859) Calling .GetConfigRaw
	I0414 14:29:06.536306 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:06.536530 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:06.536742 1213155 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 14:29:06.536766 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:06.538276 1213155 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 14:29:06.538292 1213155 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 14:29:06.538297 1213155 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 14:29:06.538303 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:06.540789 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.541096 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.541142 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.541273 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:06.541468 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.541620 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.541797 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:06.541943 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:06.542216 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:06.542236 1213155 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 14:29:06.650464 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:29:06.650493 1213155 main.go:141] libmachine: Detecting the provisioner...
	I0414 14:29:06.650505 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:06.653952 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.654723 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.654757 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.654985 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:06.655204 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.655393 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.655541 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:06.655742 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:06.655964 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:06.655983 1213155 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 14:29:06.763752 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 14:29:06.763848 1213155 main.go:141] libmachine: found compatible host: buildroot
	I0414 14:29:06.763862 1213155 main.go:141] libmachine: Provisioning with buildroot...
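Provisioner detection above boils down to running `cat /etc/os-release` and matching the ID field; "buildroot" selects the Buildroot provisioner. A sketch of parsing that output (parseOSReleaseID is an illustrative helper, not minikube's actual function):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSReleaseID extracts the ID= field from /etc/os-release content, which
// is what drives the "found compatible host: buildroot" branch above.
func parseOSReleaseID(osRelease string) string {
	sc := bufio.NewScanner(strings.NewReader(osRelease))
	for sc.Scan() {
		if v, ok := strings.CutPrefix(sc.Text(), "ID="); ok {
			return strings.Trim(v, `"`)
		}
	}
	return ""
}

func main() {
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\n"
	fmt.Println(parseOSReleaseID(out)) // buildroot
}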
	I0414 14:29:06.763874 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:29:06.764294 1213155 buildroot.go:166] provisioning hostname "ha-290859"
	I0414 14:29:06.764326 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:29:06.764523 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:06.767077 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.767516 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.767542 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.767639 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:06.767813 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.767978 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.768165 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:06.768341 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:06.768572 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:06.768583 1213155 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-290859 && echo "ha-290859" | sudo tee /etc/hostname
	I0414 14:29:06.889296 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-290859
	
	I0414 14:29:06.889330 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:06.892172 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.892600 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.892626 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.892865 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:06.893083 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.893277 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.893435 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:06.893648 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:06.893858 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:06.893874 1213155 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-290859' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-290859/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-290859' | sudo tee -a /etc/hosts; 
				fi
			fi
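	The /etc/hosts edit above is idempotent: it rewrites an existing 127.0.1.1 entry in place rather than appending a duplicate, and does nothing if some line already ends in the hostname. A quick check that it took effect (illustrative commands, not part of the test run):
	  hostname                        # expect: ha-290859
	  grep '^127.0.1.1' /etc/hosts    # expect: 127.0.1.1 ha-290859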
	I0414 14:29:07.007141 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:29:07.007184 1213155 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1196368/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1196368/.minikube}
	I0414 14:29:07.007203 1213155 buildroot.go:174] setting up certificates
	I0414 14:29:07.007215 1213155 provision.go:84] configureAuth start
	I0414 14:29:07.007224 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:29:07.007528 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:29:07.010400 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.010788 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.010824 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.010979 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.012963 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.013271 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.013387 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.013515 1213155 provision.go:143] copyHostCerts
	I0414 14:29:07.013548 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:29:07.013586 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem, removing ...
	I0414 14:29:07.013609 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:29:07.013691 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem (1082 bytes)
	I0414 14:29:07.013790 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:29:07.013815 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem, removing ...
	I0414 14:29:07.013825 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:29:07.013863 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem (1123 bytes)
	I0414 14:29:07.013930 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:29:07.013953 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem, removing ...
	I0414 14:29:07.013962 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:29:07.013998 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem (1675 bytes)
	I0414 14:29:07.014066 1213155 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem org=jenkins.ha-290859 san=[127.0.0.1 192.168.39.110 ha-290859 localhost minikube]
	I0414 14:29:07.096347 1213155 provision.go:177] copyRemoteCerts
	I0414 14:29:07.096413 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:29:07.096445 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.099387 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.099720 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.099754 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.099919 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.100133 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.100320 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.100477 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:07.185597 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0414 14:29:07.185665 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 14:29:07.208427 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0414 14:29:07.208514 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0414 14:29:07.230077 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0414 14:29:07.230146 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 14:29:07.252057 1213155 provision.go:87] duration metric: took 244.822415ms to configureAuth
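	The server certificate generated above carries SANs for 127.0.0.1, 192.168.39.110, ha-290859, localhost and minikube, so the same cert serves local and remote clients. A minimal openssl sketch approximating that step (file names and the inline SAN config are illustrative; this is not minikube's provision.go):
	  # assumes ca.pem / ca-key.pem already exist (illustrative paths)
	  openssl genrsa -out server-key.pem 2048
	  openssl req -new -key server-key.pem -subj "/O=jenkins.ha-290859" -out server.csr
	  printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.110,DNS:ha-290859,DNS:localhost,DNS:minikube\n' > san.cnf
	  openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 -extfile san.cnf -out server.pem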
	I0414 14:29:07.252098 1213155 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:29:07.252381 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:07.252417 1213155 main.go:141] libmachine: Checking connection to Docker...
	I0414 14:29:07.252428 1213155 main.go:141] libmachine: (ha-290859) Calling .GetURL
	I0414 14:29:07.253526 1213155 main.go:141] libmachine: (ha-290859) DBG | using libvirt version 6000000
	I0414 14:29:07.255629 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.255987 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.256013 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.256164 1213155 main.go:141] libmachine: Docker is up and running!
	I0414 14:29:07.256179 1213155 main.go:141] libmachine: Reticulating splines...
	I0414 14:29:07.256186 1213155 client.go:171] duration metric: took 22.312490028s to LocalClient.Create
	I0414 14:29:07.256207 1213155 start.go:167] duration metric: took 22.312544194s to libmachine.API.Create "ha-290859"
	I0414 14:29:07.256216 1213155 start.go:293] postStartSetup for "ha-290859" (driver="kvm2")
	I0414 14:29:07.256225 1213155 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:29:07.256242 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.256494 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:29:07.256518 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.258683 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.259095 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.259129 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.259274 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.259443 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.259598 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.259770 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:07.341222 1213155 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:29:07.344960 1213155 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:29:07.344983 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/addons for local assets ...
	I0414 14:29:07.345036 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/files for local assets ...
	I0414 14:29:07.345105 1213155 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> 12036392.pem in /etc/ssl/certs
	I0414 14:29:07.345117 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /etc/ssl/certs/12036392.pem
	I0414 14:29:07.345204 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:29:07.353618 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:29:07.375295 1213155 start.go:296] duration metric: took 119.0622ms for postStartSetup
	I0414 14:29:07.375348 1213155 main.go:141] libmachine: (ha-290859) Calling .GetConfigRaw
	I0414 14:29:07.376009 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:29:07.378738 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.379089 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.379127 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.379360 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:29:07.379552 1213155 start.go:128] duration metric: took 22.454193164s to createHost
	I0414 14:29:07.379576 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.381911 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.382271 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.382299 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.382412 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.382636 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.382763 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.382918 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.383103 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:07.383383 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:07.383397 1213155 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:29:07.491798 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744640947.466359070
	
	I0414 14:29:07.491832 1213155 fix.go:216] guest clock: 1744640947.466359070
	I0414 14:29:07.491843 1213155 fix.go:229] Guest: 2025-04-14 14:29:07.46635907 +0000 UTC Remote: 2025-04-14 14:29:07.37956282 +0000 UTC m=+22.563725092 (delta=86.79625ms)
	I0414 14:29:07.491874 1213155 fix.go:200] guest clock delta is within tolerance: 86.79625ms
	I0414 14:29:07.491882 1213155 start.go:83] releasing machines lock for "ha-290859", held for 22.566621352s
	I0414 14:29:07.491951 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.492257 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:29:07.494784 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.495186 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.495213 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.495369 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.495891 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.496108 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.496210 1213155 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:29:07.496270 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.496330 1213155 ssh_runner.go:195] Run: cat /version.json
	I0414 14:29:07.496359 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.499187 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.499556 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.499585 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.499605 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.499687 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.499909 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.500059 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.500076 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.500080 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.500225 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:07.500343 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.500495 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.500676 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.500868 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:07.610155 1213155 ssh_runner.go:195] Run: systemctl --version
	I0414 14:29:07.615832 1213155 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 14:29:07.620841 1213155 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:29:07.620918 1213155 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:29:07.635201 1213155 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 14:29:07.635238 1213155 start.go:495] detecting cgroup driver to use...
	I0414 14:29:07.635339 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0414 14:29:07.664507 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0414 14:29:07.677886 1213155 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:29:07.677968 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:29:07.691126 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:29:07.704327 1213155 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:29:07.821296 1213155 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:29:07.981478 1213155 docker.go:233] disabling docker service ...
	I0414 14:29:07.981570 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:29:07.995082 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:29:08.007593 1213155 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:29:08.118166 1213155 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:29:08.233009 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 14:29:08.245943 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:29:08.262966 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0414 14:29:08.272218 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0414 14:29:08.281344 1213155 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0414 14:29:08.281397 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0414 14:29:08.290468 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:29:08.299561 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0414 14:29:08.308656 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:29:08.317719 1213155 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:29:08.327133 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0414 14:29:08.336264 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0414 14:29:08.345279 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
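	Taken together, the sed edits above force containerd onto the cgroupfs driver (SystemdCgroup = false), pin the sandbox image to registry.k8s.io/pause:3.10, normalize the runtime to io.containerd.runc.v2, and point the CNI conf_dir at /etc/cni/net.d. A spot check of the rewritten file before the restart (illustrative):
	  grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
	  # expected: SystemdCgroup = false
	  #           sandbox_image = "registry.k8s.io/pause:3.10"
	  #           conf_dir = "/etc/cni/net.d"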
	I0414 14:29:08.354386 1213155 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:29:08.362578 1213155 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:29:08.362625 1213155 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:29:08.374609 1213155 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 14:29:08.383117 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:29:08.490311 1213155 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0414 14:29:08.517222 1213155 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0414 14:29:08.517297 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:29:08.522141 1213155 retry.go:31] will retry after 1.326617724s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
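	The retry above simply polls until the restarted containerd recreates its socket. A standalone equivalent of that wait, with a 60s timeout (a sketch, not minikube's retry.go):
	  # poll for the socket to appear, then fail loudly if it never does
	  for i in $(seq 1 60); do
	    [ -S /run/containerd/containerd.sock ] && break
	    sleep 1
	  done
	  [ -S /run/containerd/containerd.sock ] || { echo 'containerd socket never appeared' >&2; exit 1; }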
	I0414 14:29:09.849693 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:29:09.855377 1213155 start.go:563] Will wait 60s for crictl version
	I0414 14:29:09.855452 1213155 ssh_runner.go:195] Run: which crictl
	I0414 14:29:09.859356 1213155 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:29:09.901676 1213155 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0414 14:29:09.901749 1213155 ssh_runner.go:195] Run: containerd --version
	I0414 14:29:09.933729 1213155 ssh_runner.go:195] Run: containerd --version
	I0414 14:29:09.957147 1213155 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.23 ...
	I0414 14:29:09.958358 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:29:09.961074 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:09.961436 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:09.961465 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:09.961654 1213155 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 14:29:09.965618 1213155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:29:09.977763 1213155 kubeadm.go:883] updating cluster {Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 14:29:09.977920 1213155 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:29:09.977985 1213155 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:29:10.007423 1213155 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0414 14:29:10.007567 1213155 ssh_runner.go:195] Run: which lz4
	I0414 14:29:10.011302 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0414 14:29:10.011399 1213155 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 14:29:10.015201 1213155 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 14:29:10.015237 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (398567491 bytes)
	I0414 14:29:11.177802 1213155 containerd.go:563] duration metric: took 1.166430977s to copy over tarball
	I0414 14:29:11.177883 1213155 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 14:29:13.222422 1213155 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.044497794s)
	I0414 14:29:13.222461 1213155 containerd.go:570] duration metric: took 2.04462504s to extract the tarball
	I0414 14:29:13.222471 1213155 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 14:29:13.258541 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:29:13.368119 1213155 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0414 14:29:13.394813 1213155 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:29:13.428402 1213155 retry.go:31] will retry after 248.442754ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-04-14T14:29:13Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0414 14:29:13.677983 1213155 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:29:13.709958 1213155 containerd.go:627] all images are preloaded for containerd runtime.
	I0414 14:29:13.709986 1213155 cache_images.go:84] Images are preloaded, skipping loading
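	The preload flow just completed: scp the ~400 MB lz4 tarball into the VM, unpack it into /var (which populates containerd's image store), restart containerd, then re-run `crictl images` until the daemon answers. The unpack step by hand uses the same flags as the logged command:
	  sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	  sudo systemctl restart containerd
	  sudo crictl images --output json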
	I0414 14:29:13.709997 1213155 kubeadm.go:934] updating node { 192.168.39.110 8443 v1.32.2 containerd true true} ...
	I0414 14:29:13.710119 1213155 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-290859 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.110
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
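	The drop-in above clears the stock start command (the empty `ExecStart=`) and replaces it with the versioned kubelet binary plus node-specific flags. Once it is scp'd into /etc/systemd/system/kubelet.service.d/ below, the merged unit can be inspected with (illustrative):
	  systemctl cat kubelet        # base unit plus the 10-kubeadm.conf drop-in
	  systemctl is-active kubelet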
	I0414 14:29:13.710205 1213155 ssh_runner.go:195] Run: sudo crictl info
	I0414 14:29:13.747854 1213155 cni.go:84] Creating CNI manager for ""
	I0414 14:29:13.747881 1213155 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0414 14:29:13.747891 1213155 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 14:29:13.747912 1213155 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.110 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-290859 NodeName:ha-290859 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.110"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.110 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 14:29:13.748064 1213155 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.110
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-290859"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.110"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.110"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
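	
	This is the full config later written to /var/tmp/minikube/kubeadm.yaml: InitConfiguration binds the local API server to 192.168.39.110:8443, ClusterConfiguration sets the shared endpoint control-plane.minikube.internal:8443, and the kubelet/kube-proxy sections relax eviction and conntrack tuning for the small test VM. kubeadm can validate such a file without touching the node (illustrative; the test does not run this):
	  sudo /var/lib/minikube/binaries/v1.32.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run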
	
	I0414 14:29:13.748098 1213155 kube-vip.go:115] generating kube-vip config ...
	I0414 14:29:13.748144 1213155 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0414 14:29:13.764006 1213155 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0414 14:29:13.764157 1213155 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.10
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
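	kube-vip runs as a static pod on each control-plane node: it wins the plndr-cp-lock lease, claims the VIP 192.168.39.254/32 on eth0 via ARP, and load-balances port 8443 across the API servers (lb_enable). Once the pod is up, the VIP is visible on the current leader (illustrative checks):
	  ip addr show eth0 | grep 192.168.39.254
	  curl -k https://192.168.39.254:8443/healthz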
	I0414 14:29:13.764258 1213155 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 14:29:13.773742 1213155 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 14:29:13.773825 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0414 14:29:13.782879 1213155 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0414 14:29:13.798384 1213155 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 14:29:13.813614 1213155 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0414 14:29:13.828571 1213155 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1448 bytes)
	I0414 14:29:13.844489 1213155 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0414 14:29:13.848595 1213155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:29:13.861109 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:29:13.970530 1213155 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:29:13.987774 1213155 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859 for IP: 192.168.39.110
	I0414 14:29:13.987806 1213155 certs.go:194] generating shared ca certs ...
	I0414 14:29:13.987826 1213155 certs.go:226] acquiring lock for ca certs: {Name:mk7215406b4c41badf9eca6bf9f1036fd88f670e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:13.988007 1213155 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key
	I0414 14:29:13.988081 1213155 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key
	I0414 14:29:13.988097 1213155 certs.go:256] generating profile certs ...
	I0414 14:29:13.988180 1213155 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key
	I0414 14:29:13.988200 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt with IP's: []
	I0414 14:29:14.112386 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt ...
	I0414 14:29:14.112419 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt: {Name:mkaa12fb6551a5751b7fccd564d65a45c41d9fae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.112582 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key ...
	I0414 14:29:14.112593 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key: {Name:mk289f4dd0a4fd9031dc4ffc7198a0cf95bd5550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.112674 1213155 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037
	I0414 14:29:14.112690 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110 192.168.39.254]
	I0414 14:29:14.362652 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037 ...
	I0414 14:29:14.362686 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037: {Name:mkb37a2918627d85c90b385a1878c8973ae4ce15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.362861 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037 ...
	I0414 14:29:14.362875 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037: {Name:mk9be12aff468559ae8511cb5c354c2cb0f19d89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.362947 1213155 certs.go:381] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt
	I0414 14:29:14.363058 1213155 certs.go:385] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key
	I0414 14:29:14.363124 1213155 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key
	I0414 14:29:14.363139 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt with IP's: []
	I0414 14:29:14.734988 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt ...
	I0414 14:29:14.735020 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt: {Name:mkd4197f76084714cf4c93b86f69c9de5e486dfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.735175 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key ...
	I0414 14:29:14.735185 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key: {Name:mkafd73813de8b0bb698e460f51557bc241d5b76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.735249 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0414 14:29:14.735287 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0414 14:29:14.735300 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0414 14:29:14.735312 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0414 14:29:14.735324 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0414 14:29:14.735336 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0414 14:29:14.735348 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0414 14:29:14.735362 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0414 14:29:14.735413 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem (1338 bytes)
	W0414 14:29:14.735450 1213155 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639_empty.pem, impossibly tiny 0 bytes
	I0414 14:29:14.735459 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 14:29:14.735483 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem (1082 bytes)
	I0414 14:29:14.735504 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:29:14.735524 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem (1675 bytes)
	I0414 14:29:14.735559 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:29:14.735585 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:14.735598 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem -> /usr/share/ca-certificates/1203639.pem
	I0414 14:29:14.735609 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /usr/share/ca-certificates/12036392.pem
	I0414 14:29:14.736193 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:29:14.767094 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:29:14.800218 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:29:14.821856 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:29:14.844537 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0414 14:29:14.866333 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 14:29:14.888112 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:29:14.916382 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 14:29:14.938747 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:29:14.961044 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem --> /usr/share/ca-certificates/1203639.pem (1338 bytes)
	I0414 14:29:14.982817 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /usr/share/ca-certificates/12036392.pem (1708 bytes)
	I0414 14:29:15.004432 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 14:29:15.020381 1213155 ssh_runner.go:195] Run: openssl version
	I0414 14:29:15.026049 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:29:15.036472 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:15.040722 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:15.040772 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:15.046327 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 14:29:15.056866 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1203639.pem && ln -fs /usr/share/ca-certificates/1203639.pem /etc/ssl/certs/1203639.pem"
	I0414 14:29:15.067689 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1203639.pem
	I0414 14:29:15.071944 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1203639.pem
	I0414 14:29:15.071988 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1203639.pem
	I0414 14:29:15.077553 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1203639.pem /etc/ssl/certs/51391683.0"
	I0414 14:29:15.088088 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12036392.pem && ln -fs /usr/share/ca-certificates/12036392.pem /etc/ssl/certs/12036392.pem"
	I0414 14:29:15.098760 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12036392.pem
	I0414 14:29:15.103102 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/12036392.pem
	I0414 14:29:15.103157 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12036392.pem
	I0414 14:29:15.108670 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12036392.pem /etc/ssl/certs/3ec20f2e.0"
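	Each CA lands in the guest twice: the .pem under /usr/share/ca-certificates and a subject-hash-named symlink in /etc/ssl/certs, which is the layout OpenSSL's -CApath lookup expects. The hash names above (b5213941, 51391683, 3ec20f2e) are the output of `openssl x509 -hash`; verification against the installed store (illustrative):
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	  openssl verify -CApath /etc/ssl/certs /var/lib/minikube/certs/apiserver.crt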
	I0414 14:29:15.119187 1213155 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:29:15.123052 1213155 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 14:29:15.123124 1213155 kubeadm.go:392] StartCluster: {Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:29:15.123226 1213155 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0414 14:29:15.123302 1213155 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 14:29:15.161985 1213155 cri.go:89] found id: ""
	I0414 14:29:15.162066 1213155 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 14:29:15.171810 1213155 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 14:29:15.180816 1213155 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 14:29:15.189781 1213155 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 14:29:15.189798 1213155 kubeadm.go:157] found existing configuration files:
	
	I0414 14:29:15.189837 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 14:29:15.198461 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 14:29:15.198520 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 14:29:15.207495 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 14:29:15.216131 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 14:29:15.216195 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 14:29:15.224923 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 14:29:15.233259 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 14:29:15.233331 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 14:29:15.241811 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 14:29:15.250678 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 14:29:15.250735 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
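
Editor's note: the four grep/rm pairs above are minikube's stale-config check: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed if it does not match (here they simply do not exist yet). A minimal shell sketch of the same loop, illustrative only and not minikube's actual code:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"   # stale or missing: let kubeadm regenerate it
    done
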
	I0414 14:29:15.260028 1213155 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 14:29:15.480841 1213155 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 14:29:26.375395 1213155 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0414 14:29:26.375454 1213155 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 14:29:26.375539 1213155 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 14:29:26.375638 1213155 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 14:29:26.375756 1213155 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 14:29:26.375859 1213155 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 14:29:26.377483 1213155 out.go:235]   - Generating certificates and keys ...
	I0414 14:29:26.377576 1213155 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 14:29:26.377649 1213155 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 14:29:26.377746 1213155 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 14:29:26.377814 1213155 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 14:29:26.377894 1213155 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 14:29:26.377993 1213155 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 14:29:26.378062 1213155 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 14:29:26.378201 1213155 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-290859 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
	I0414 14:29:26.378273 1213155 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 14:29:26.378435 1213155 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-290859 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
	I0414 14:29:26.378525 1213155 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 14:29:26.378617 1213155 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 14:29:26.378679 1213155 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 14:29:26.378756 1213155 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 14:29:26.378826 1213155 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 14:29:26.378905 1213155 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0414 14:29:26.378987 1213155 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 14:29:26.379078 1213155 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 14:29:26.379147 1213155 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 14:29:26.379232 1213155 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 14:29:26.379336 1213155 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 14:29:26.381520 1213155 out.go:235]   - Booting up control plane ...
	I0414 14:29:26.381636 1213155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 14:29:26.381716 1213155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 14:29:26.381797 1213155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 14:29:26.381942 1213155 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 14:29:26.382066 1213155 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 14:29:26.382127 1213155 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 14:29:26.382279 1213155 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0414 14:29:26.382430 1213155 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0414 14:29:26.382522 1213155 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.073677ms
	I0414 14:29:26.382613 1213155 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0414 14:29:26.382699 1213155 kubeadm.go:310] [api-check] The API server is healthy after 6.046564753s
	I0414 14:29:26.382824 1213155 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0414 14:29:26.382965 1213155 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0414 14:29:26.383055 1213155 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0414 14:29:26.383232 1213155 kubeadm.go:310] [mark-control-plane] Marking the node ha-290859 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0414 14:29:26.383336 1213155 kubeadm.go:310] [bootstrap-token] Using token: vqb1fe.jxjhh2el8g0wstxf
	I0414 14:29:26.384515 1213155 out.go:235]   - Configuring RBAC rules ...
	I0414 14:29:26.384631 1213155 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0414 14:29:26.384713 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0414 14:29:26.384863 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0414 14:29:26.384975 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0414 14:29:26.385071 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0414 14:29:26.385151 1213155 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0414 14:29:26.385262 1213155 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0414 14:29:26.385326 1213155 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0414 14:29:26.385400 1213155 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0414 14:29:26.385416 1213155 kubeadm.go:310] 
	I0414 14:29:26.385469 1213155 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0414 14:29:26.385475 1213155 kubeadm.go:310] 
	I0414 14:29:26.385551 1213155 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0414 14:29:26.385557 1213155 kubeadm.go:310] 
	I0414 14:29:26.385578 1213155 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0414 14:29:26.385628 1213155 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0414 14:29:26.385686 1213155 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0414 14:29:26.385693 1213155 kubeadm.go:310] 
	I0414 14:29:26.385743 1213155 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0414 14:29:26.385752 1213155 kubeadm.go:310] 
	I0414 14:29:26.385800 1213155 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0414 14:29:26.385806 1213155 kubeadm.go:310] 
	I0414 14:29:26.385852 1213155 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0414 14:29:26.385921 1213155 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0414 14:29:26.385993 1213155 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0414 14:29:26.385999 1213155 kubeadm.go:310] 
	I0414 14:29:26.386068 1213155 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0414 14:29:26.386137 1213155 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0414 14:29:26.386143 1213155 kubeadm.go:310] 
	I0414 14:29:26.386219 1213155 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token vqb1fe.jxjhh2el8g0wstxf \
	I0414 14:29:26.386324 1213155 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c1bc537cee1b1ab5982921331b936a1839b1da6b0963279993bdeae11071854b \
	I0414 14:29:26.386357 1213155 kubeadm.go:310] 	--control-plane 
	I0414 14:29:26.386367 1213155 kubeadm.go:310] 
	I0414 14:29:26.386468 1213155 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0414 14:29:26.386481 1213155 kubeadm.go:310] 
	I0414 14:29:26.386583 1213155 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vqb1fe.jxjhh2el8g0wstxf \
	I0414 14:29:26.386727 1213155 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c1bc537cee1b1ab5982921331b936a1839b1da6b0963279993bdeae11071854b 
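
Editor's note: the --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 digest of the cluster CA's Subject Public Key Info. It can be recomputed on the control-plane node with the standard kubeadm openssl recipe (the cert dir is minikube's /var/lib/minikube/certs per the [certs] lines above; an RSA CA key is assumed):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'

The output should match the c1bc537c... value embedded in the join command.
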
	I0414 14:29:26.386755 1213155 cni.go:84] Creating CNI manager for ""
	I0414 14:29:26.386764 1213155 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0414 14:29:26.388208 1213155 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0414 14:29:26.389242 1213155 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0414 14:29:26.394753 1213155 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0414 14:29:26.394774 1213155 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0414 14:29:26.412210 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
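
Editor's note: with kindnet recommended for the multinode case, the manifest is copied to /var/tmp/minikube/cni.yaml and applied with the cached kubectl, as above. A quick way to confirm the CNI daemonset rolled out (the app=kindnet label is an assumption about the manifest, not something shown in this log):

    sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get daemonset,pods -l app=kindnet
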
	I0414 14:29:26.820060 1213155 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 14:29:26.820136 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:26.820188 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-290859 minikube.k8s.io/updated_at=2025_04_14T14_29_26_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=ed8f1f01b35eff2786f40199152a1775806f2de2 minikube.k8s.io/name=ha-290859 minikube.k8s.io/primary=true
	I0414 14:29:27.135153 1213155 ops.go:34] apiserver oom_adj: -16
	I0414 14:29:27.135367 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:27.635449 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:28.135449 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:28.636235 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:29.136309 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:29.636026 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:29.742992 1213155 kubeadm.go:1113] duration metric: took 2.922923817s to wait for elevateKubeSystemPrivileges
	I0414 14:29:29.743045 1213155 kubeadm.go:394] duration metric: took 14.619926947s to StartCluster
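
Editor's note: the repeated "kubectl get sa default" calls above are a fixed-interval poll; minikube waits about 500ms per attempt (2.92s total here) for the default ServiceAccount to exist before granting kube-system privileges. The equivalent shell idiom, as a sketch:

    until sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        get sa default >/dev/null 2>&1; do
      sleep 0.5   # ServiceAccount controller has not created "default" yet
    done
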
	I0414 14:29:29.743074 1213155 settings.go:142] acquiring lock: {Name:mk41907a6d0da0bb56b7cd58b5d8065ec36ecc97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:29.743194 1213155 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:29:29.744197 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/kubeconfig: {Name:mkeb969af3beabfdafe344f27031959a97621135 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:29.744491 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0414 14:29:29.744502 1213155 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0414 14:29:29.744531 1213155 start.go:241] waiting for startup goroutines ...
	I0414 14:29:29.744555 1213155 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 14:29:29.744638 1213155 addons.go:69] Setting storage-provisioner=true in profile "ha-290859"
	I0414 14:29:29.744667 1213155 addons.go:238] Setting addon storage-provisioner=true in "ha-290859"
	I0414 14:29:29.744674 1213155 addons.go:69] Setting default-storageclass=true in profile "ha-290859"
	I0414 14:29:29.744699 1213155 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:29:29.744707 1213155 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-290859"
	I0414 14:29:29.744811 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:29.745181 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.745244 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.745183 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.745351 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.761398 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40887
	I0414 14:29:29.761447 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39907
	I0414 14:29:29.761914 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.762048 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.762457 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.762483 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.762590 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.762615 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.762878 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.762995 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.763052 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:29.763589 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.763641 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.765711 1213155 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:29:29.765898 1213155 kapi.go:59] client config for ha-290859: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt", KeyFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key", CAFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0414 14:29:29.766513 1213155 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0414 14:29:29.766536 1213155 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0414 14:29:29.766543 1213155 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0414 14:29:29.766547 1213155 cert_rotation.go:140] Starting client certificate rotation controller
	I0414 14:29:29.766549 1213155 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0414 14:29:29.766958 1213155 addons.go:238] Setting addon default-storageclass=true in "ha-290859"
	I0414 14:29:29.767009 1213155 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:29:29.767411 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.767464 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.779638 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46315
	I0414 14:29:29.780179 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.780847 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.780887 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.781279 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.781512 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:29.783372 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:29.783403 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36833
	I0414 14:29:29.783908 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.784349 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.784370 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.784677 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.785084 1213155 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 14:29:29.785313 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.785366 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.786178 1213155 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 14:29:29.786200 1213155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 14:29:29.786221 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:29.789923 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:29.790430 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:29.790464 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:29.790637 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:29.790795 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:29.790922 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:29.791099 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:29.802732 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37933
	I0414 14:29:29.803356 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.803862 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.803890 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.804276 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.804490 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:29.806170 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:29.806431 1213155 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 14:29:29.806453 1213155 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 14:29:29.806472 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:29.808998 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:29.809401 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:29.809433 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:29.809569 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:29.809729 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:29.809892 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:29.810022 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:29.896163 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0414 14:29:29.925192 1213155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 14:29:29.976032 1213155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 14:29:30.538988 1213155 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
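
Editor's note: the long sed pipeline at 14:29:29.896 rewrites the CoreDNS ConfigMap in place: it inserts a hosts block ahead of the "forward . /etc/resolv.conf" line and a "log" directive ahead of "errors", which is what produces the "host record injected" message above. Reconstructed from the sed expression, the patched Corefile fragment looks like this (untouched plugins elided with ...):

        log
        errors
        ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
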
	I0414 14:29:30.715801 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.715837 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.715837 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.715853 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.716172 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.716195 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.716206 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.716213 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.716280 1213155 main.go:141] libmachine: (ha-290859) DBG | Closing plugin on server side
	I0414 14:29:30.716311 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.716327 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.716336 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.716346 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.716567 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.716583 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.716597 1213155 main.go:141] libmachine: (ha-290859) DBG | Closing plugin on server side
	I0414 14:29:30.716566 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.716613 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.716759 1213155 round_trippers.go:470] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0414 14:29:30.716773 1213155 round_trippers.go:476] Request Headers:
	I0414 14:29:30.716785 1213155 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:29:30.716791 1213155 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:29:30.730413 1213155 round_trippers.go:581] Response Status: 200 OK in 13 milliseconds
	I0414 14:29:30.730637 1213155 round_trippers.go:470] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0414 14:29:30.730648 1213155 round_trippers.go:476] Request Headers:
	I0414 14:29:30.730655 1213155 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:29:30.730659 1213155 round_trippers.go:480]     Content-Type: application/vnd.kubernetes.protobuf
	I0414 14:29:30.730662 1213155 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:29:30.734349 1213155 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0414 14:29:30.734498 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.734513 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.734892 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.734913 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.734944 1213155 main.go:141] libmachine: (ha-290859) DBG | Closing plugin on server side
	I0414 14:29:30.736606 1213155 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0414 14:29:30.738276 1213155 addons.go:514] duration metric: took 993.723048ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0414 14:29:30.738323 1213155 start.go:246] waiting for cluster config update ...
	I0414 14:29:30.738339 1213155 start.go:255] writing updated cluster config ...
	I0414 14:29:30.739993 1213155 out.go:201] 
	I0414 14:29:30.741235 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:30.741303 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:29:30.742718 1213155 out.go:177] * Starting "ha-290859-m02" control-plane node in "ha-290859" cluster
	I0414 14:29:30.743745 1213155 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:29:30.743770 1213155 cache.go:56] Caching tarball of preloaded images
	I0414 14:29:30.743876 1213155 preload.go:172] Found /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0414 14:29:30.743890 1213155 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0414 14:29:30.743970 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:29:30.744172 1213155 start.go:360] acquireMachinesLock for ha-290859-m02: {Name:mk496006d22a0565bb9e0d565e1b3cb0cf0971cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 14:29:30.744229 1213155 start.go:364] duration metric: took 28.185µs to acquireMachinesLock for "ha-290859-m02"
	I0414 14:29:30.744249 1213155 start.go:93] Provisioning new machine with config: &{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0414 14:29:30.744334 1213155 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0414 14:29:30.745838 1213155 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0414 14:29:30.745923 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:30.745962 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:30.761449 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46555
	I0414 14:29:30.761938 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:30.762474 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:30.762500 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:30.762925 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:30.763197 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:29:30.763401 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:30.763637 1213155 start.go:159] libmachine.API.Create for "ha-290859" (driver="kvm2")
	I0414 14:29:30.763675 1213155 client.go:168] LocalClient.Create starting
	I0414 14:29:30.763717 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem
	I0414 14:29:30.763761 1213155 main.go:141] libmachine: Decoding PEM data...
	I0414 14:29:30.763783 1213155 main.go:141] libmachine: Parsing certificate...
	I0414 14:29:30.763861 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem
	I0414 14:29:30.763890 1213155 main.go:141] libmachine: Decoding PEM data...
	I0414 14:29:30.763907 1213155 main.go:141] libmachine: Parsing certificate...
	I0414 14:29:30.763954 1213155 main.go:141] libmachine: Running pre-create checks...
	I0414 14:29:30.763968 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .PreCreateCheck
	I0414 14:29:30.764183 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
	I0414 14:29:30.764607 1213155 main.go:141] libmachine: Creating machine...
	I0414 14:29:30.764633 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .Create
	I0414 14:29:30.764796 1213155 main.go:141] libmachine: (ha-290859-m02) creating KVM machine...
	I0414 14:29:30.764820 1213155 main.go:141] libmachine: (ha-290859-m02) creating network...
	I0414 14:29:30.765949 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found existing default KVM network
	I0414 14:29:30.766029 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found existing private KVM network mk-ha-290859
	I0414 14:29:30.766196 1213155 main.go:141] libmachine: (ha-290859-m02) setting up store path in /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02 ...
	I0414 14:29:30.766222 1213155 main.go:141] libmachine: (ha-290859-m02) building disk image from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 14:29:30.766301 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:30.766189 1213531 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:29:30.766373 1213155 main.go:141] libmachine: (ha-290859-m02) Downloading /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 14:29:31.062543 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:31.062391 1213531 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa...
	I0414 14:29:31.719024 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:31.718890 1213531 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/ha-290859-m02.rawdisk...
	I0414 14:29:31.719061 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Writing magic tar header
	I0414 14:29:31.719076 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Writing SSH key tar header
	I0414 14:29:31.719086 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:31.719015 1213531 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02 ...
	I0414 14:29:31.719187 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02
	I0414 14:29:31.719213 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02 (perms=drwx------)
	I0414 14:29:31.719221 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines
	I0414 14:29:31.719232 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:29:31.719239 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines (perms=drwxr-xr-x)
	I0414 14:29:31.719270 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368
	I0414 14:29:31.719288 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube (perms=drwxr-xr-x)
	I0414 14:29:31.719298 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 14:29:31.719315 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins
	I0414 14:29:31.719326 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home
	I0414 14:29:31.719336 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | skipping /home - not owner
	I0414 14:29:31.719349 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368 (perms=drwxrwxr-x)
	I0414 14:29:31.719368 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 14:29:31.719380 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 14:29:31.719386 1213155 main.go:141] libmachine: (ha-290859-m02) creating domain...
	I0414 14:29:31.720303 1213155 main.go:141] libmachine: (ha-290859-m02) define libvirt domain using xml: 
	I0414 14:29:31.720321 1213155 main.go:141] libmachine: (ha-290859-m02) <domain type='kvm'>
	I0414 14:29:31.720330 1213155 main.go:141] libmachine: (ha-290859-m02)   <name>ha-290859-m02</name>
	I0414 14:29:31.720338 1213155 main.go:141] libmachine: (ha-290859-m02)   <memory unit='MiB'>2200</memory>
	I0414 14:29:31.720346 1213155 main.go:141] libmachine: (ha-290859-m02)   <vcpu>2</vcpu>
	I0414 14:29:31.720352 1213155 main.go:141] libmachine: (ha-290859-m02)   <features>
	I0414 14:29:31.720359 1213155 main.go:141] libmachine: (ha-290859-m02)     <acpi/>
	I0414 14:29:31.720364 1213155 main.go:141] libmachine: (ha-290859-m02)     <apic/>
	I0414 14:29:31.720371 1213155 main.go:141] libmachine: (ha-290859-m02)     <pae/>
	I0414 14:29:31.720381 1213155 main.go:141] libmachine: (ha-290859-m02)     
	I0414 14:29:31.720411 1213155 main.go:141] libmachine: (ha-290859-m02)   </features>
	I0414 14:29:31.720433 1213155 main.go:141] libmachine: (ha-290859-m02)   <cpu mode='host-passthrough'>
	I0414 14:29:31.720452 1213155 main.go:141] libmachine: (ha-290859-m02)   
	I0414 14:29:31.720461 1213155 main.go:141] libmachine: (ha-290859-m02)   </cpu>
	I0414 14:29:31.720488 1213155 main.go:141] libmachine: (ha-290859-m02)   <os>
	I0414 14:29:31.720507 1213155 main.go:141] libmachine: (ha-290859-m02)     <type>hvm</type>
	I0414 14:29:31.720537 1213155 main.go:141] libmachine: (ha-290859-m02)     <boot dev='cdrom'/>
	I0414 14:29:31.720559 1213155 main.go:141] libmachine: (ha-290859-m02)     <boot dev='hd'/>
	I0414 14:29:31.720572 1213155 main.go:141] libmachine: (ha-290859-m02)     <bootmenu enable='no'/>
	I0414 14:29:31.720587 1213155 main.go:141] libmachine: (ha-290859-m02)   </os>
	I0414 14:29:31.720597 1213155 main.go:141] libmachine: (ha-290859-m02)   <devices>
	I0414 14:29:31.720609 1213155 main.go:141] libmachine: (ha-290859-m02)     <disk type='file' device='cdrom'>
	I0414 14:29:31.720626 1213155 main.go:141] libmachine: (ha-290859-m02)       <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/boot2docker.iso'/>
	I0414 14:29:31.720637 1213155 main.go:141] libmachine: (ha-290859-m02)       <target dev='hdc' bus='scsi'/>
	I0414 14:29:31.720649 1213155 main.go:141] libmachine: (ha-290859-m02)       <readonly/>
	I0414 14:29:31.720659 1213155 main.go:141] libmachine: (ha-290859-m02)     </disk>
	I0414 14:29:31.720668 1213155 main.go:141] libmachine: (ha-290859-m02)     <disk type='file' device='disk'>
	I0414 14:29:31.720684 1213155 main.go:141] libmachine: (ha-290859-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 14:29:31.720699 1213155 main.go:141] libmachine: (ha-290859-m02)       <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/ha-290859-m02.rawdisk'/>
	I0414 14:29:31.720732 1213155 main.go:141] libmachine: (ha-290859-m02)       <target dev='hda' bus='virtio'/>
	I0414 14:29:31.720746 1213155 main.go:141] libmachine: (ha-290859-m02)     </disk>
	I0414 14:29:31.720756 1213155 main.go:141] libmachine: (ha-290859-m02)     <interface type='network'>
	I0414 14:29:31.720768 1213155 main.go:141] libmachine: (ha-290859-m02)       <source network='mk-ha-290859'/>
	I0414 14:29:31.720777 1213155 main.go:141] libmachine: (ha-290859-m02)       <model type='virtio'/>
	I0414 14:29:31.720788 1213155 main.go:141] libmachine: (ha-290859-m02)     </interface>
	I0414 14:29:31.720799 1213155 main.go:141] libmachine: (ha-290859-m02)     <interface type='network'>
	I0414 14:29:31.720809 1213155 main.go:141] libmachine: (ha-290859-m02)       <source network='default'/>
	I0414 14:29:31.720821 1213155 main.go:141] libmachine: (ha-290859-m02)       <model type='virtio'/>
	I0414 14:29:31.720835 1213155 main.go:141] libmachine: (ha-290859-m02)     </interface>
	I0414 14:29:31.720844 1213155 main.go:141] libmachine: (ha-290859-m02)     <serial type='pty'>
	I0414 14:29:31.720855 1213155 main.go:141] libmachine: (ha-290859-m02)       <target port='0'/>
	I0414 14:29:31.720865 1213155 main.go:141] libmachine: (ha-290859-m02)     </serial>
	I0414 14:29:31.720875 1213155 main.go:141] libmachine: (ha-290859-m02)     <console type='pty'>
	I0414 14:29:31.720886 1213155 main.go:141] libmachine: (ha-290859-m02)       <target type='serial' port='0'/>
	I0414 14:29:31.720896 1213155 main.go:141] libmachine: (ha-290859-m02)     </console>
	I0414 14:29:31.720909 1213155 main.go:141] libmachine: (ha-290859-m02)     <rng model='virtio'>
	I0414 14:29:31.720943 1213155 main.go:141] libmachine: (ha-290859-m02)       <backend model='random'>/dev/random</backend>
	I0414 14:29:31.720956 1213155 main.go:141] libmachine: (ha-290859-m02)     </rng>
	I0414 14:29:31.720962 1213155 main.go:141] libmachine: (ha-290859-m02)     
	I0414 14:29:31.720972 1213155 main.go:141] libmachine: (ha-290859-m02)     
	I0414 14:29:31.720978 1213155 main.go:141] libmachine: (ha-290859-m02)   </devices>
	I0414 14:29:31.720993 1213155 main.go:141] libmachine: (ha-290859-m02) </domain>
	I0414 14:29:31.721002 1213155 main.go:141] libmachine: (ha-290859-m02) 
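
Editor's note: for readability, here is the libvirt domain definition assembled from the log lines above (content identical, log prefixes stripped). The same XML can be retrieved later with "virsh dumpxml ha-290859-m02":

    <domain type='kvm'>
      <name>ha-290859-m02</name>
      <memory unit='MiB'>2200</memory>
      <vcpu>2</vcpu>
      <features>
        <acpi/>
        <apic/>
        <pae/>
      </features>
      <cpu mode='host-passthrough'>
      </cpu>
      <os>
        <type>hvm</type>
        <boot dev='cdrom'/>
        <boot dev='hd'/>
        <bootmenu enable='no'/>
      </os>
      <devices>
        <disk type='file' device='cdrom'>
          <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/boot2docker.iso'/>
          <target dev='hdc' bus='scsi'/>
          <readonly/>
        </disk>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='default' io='threads' />
          <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/ha-290859-m02.rawdisk'/>
          <target dev='hda' bus='virtio'/>
        </disk>
        <interface type='network'>
          <source network='mk-ha-290859'/>
          <model type='virtio'/>
        </interface>
        <interface type='network'>
          <source network='default'/>
          <model type='virtio'/>
        </interface>
        <serial type='pty'>
          <target port='0'/>
        </serial>
        <console type='pty'>
          <target type='serial' port='0'/>
        </console>
        <rng model='virtio'>
          <backend model='random'>/dev/random</backend>
        </rng>
      </devices>
    </domain>
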
	I0414 14:29:31.727524 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:76:01:7d in network default
	I0414 14:29:31.728172 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:31.728187 1213155 main.go:141] libmachine: (ha-290859-m02) starting domain...
	I0414 14:29:31.728195 1213155 main.go:141] libmachine: (ha-290859-m02) ensuring networks are active...
	I0414 14:29:31.728896 1213155 main.go:141] libmachine: (ha-290859-m02) Ensuring network default is active
	I0414 14:29:31.729170 1213155 main.go:141] libmachine: (ha-290859-m02) Ensuring network mk-ha-290859 is active
	I0414 14:29:31.729521 1213155 main.go:141] libmachine: (ha-290859-m02) getting domain XML...
	I0414 14:29:31.730489 1213155 main.go:141] libmachine: (ha-290859-m02) creating domain...
	I0414 14:29:32.993969 1213155 main.go:141] libmachine: (ha-290859-m02) waiting for IP...
	I0414 14:29:32.996009 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:32.996441 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:32.996505 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:32.996448 1213531 retry.go:31] will retry after 202.522594ms: waiting for domain to come up
	I0414 14:29:33.201175 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:33.201705 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:33.201751 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:33.201682 1213531 retry.go:31] will retry after 346.96007ms: waiting for domain to come up
	I0414 14:29:33.550485 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:33.550900 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:33.550931 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:33.550863 1213531 retry.go:31] will retry after 407.207189ms: waiting for domain to come up
	I0414 14:29:33.959550 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:33.960116 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:33.960149 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:33.960094 1213531 retry.go:31] will retry after 434.401549ms: waiting for domain to come up
	I0414 14:29:34.395749 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:34.396217 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:34.396267 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:34.396208 1213531 retry.go:31] will retry after 552.547121ms: waiting for domain to come up
	I0414 14:29:34.949860 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:34.950310 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:34.950344 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:34.950269 1213531 retry.go:31] will retry after 848.939274ms: waiting for domain to come up
	I0414 14:29:35.800706 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:35.801275 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:35.801301 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:35.801229 1213531 retry.go:31] will retry after 1.078619357s: waiting for domain to come up
	I0414 14:29:36.881700 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:36.882163 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:36.882187 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:36.882128 1213531 retry.go:31] will retry after 1.079210669s: waiting for domain to come up
	I0414 14:29:37.963455 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:37.963935 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:37.963969 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:37.963899 1213531 retry.go:31] will retry after 1.194058186s: waiting for domain to come up
	I0414 14:29:39.160481 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:39.160993 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:39.161031 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:39.160949 1213531 retry.go:31] will retry after 1.513626688s: waiting for domain to come up
	I0414 14:29:40.676551 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:40.677038 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:40.677071 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:40.677004 1213531 retry.go:31] will retry after 1.924347004s: waiting for domain to come up
	I0414 14:29:42.603644 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:42.604168 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:42.604192 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:42.604145 1213531 retry.go:31] will retry after 2.797639018s: waiting for domain to come up
	I0414 14:29:45.405004 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:45.405658 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:45.405688 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:45.405627 1213531 retry.go:31] will retry after 2.864814671s: waiting for domain to come up
	I0414 14:29:48.274060 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:48.274518 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:48.274591 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:48.274508 1213531 retry.go:31] will retry after 4.611052523s: waiting for domain to come up
	I0414 14:29:52.886693 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:52.887068 1213155 main.go:141] libmachine: (ha-290859-m02) found domain IP: 192.168.39.111
	I0414 14:29:52.887093 1213155 main.go:141] libmachine: (ha-290859-m02) reserving static IP address...
	I0414 14:29:52.887105 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has current primary IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:52.887506 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find host DHCP lease matching {name: "ha-290859-m02", mac: "52:54:00:f0:fd:94", ip: "192.168.39.111"} in network mk-ha-290859
	I0414 14:29:52.966052 1213155 main.go:141] libmachine: (ha-290859-m02) reserved static IP address 192.168.39.111 for domain ha-290859-m02
	I0414 14:29:52.966083 1213155 main.go:141] libmachine: (ha-290859-m02) waiting for SSH...
	I0414 14:29:52.966091 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Getting to WaitForSSH function...
	I0414 14:29:52.968665 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:52.969034 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:52.969082 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:52.969208 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Using SSH client type: external
	I0414 14:29:52.969231 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa (-rw-------)
	I0414 14:29:52.969263 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:29:52.969282 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | About to run SSH command:
	I0414 14:29:52.969295 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | exit 0
	I0414 14:29:53.095336 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | SSH cmd err, output: <nil>: 
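The WaitForSSH step shells out to the system ssh binary with the options logged above and runs a throwaway `exit 0`; a nil error means sshd is up and the injected key is accepted. A hedged Go sketch of the same probe (key path illustrative, option list trimmed from the log):

package main

import (
	"fmt"
	"os/exec"
)

// sshReady returns nil once `ssh ... exit 0` succeeds against the guest,
// mirroring the external-client WaitForSSH probe in the log above.
func sshReady(ip, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "PasswordAuthentication=no",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit 0",
	}
	out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ssh not ready: %v (output: %s)", err, out)
	}
	return nil
}

func main() {
	fmt.Println(sshReady("192.168.39.111", "/path/to/id_rsa"))
}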
	I0414 14:29:53.095545 1213155 main.go:141] libmachine: (ha-290859-m02) KVM machine creation complete
	I0414 14:29:53.095910 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
	I0414 14:29:53.096462 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:53.096622 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:53.096806 1213155 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 14:29:53.096820 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetState
	I0414 14:29:53.098070 1213155 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 14:29:53.098085 1213155 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 14:29:53.098090 1213155 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 14:29:53.098095 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.100244 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.100649 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.100680 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.100852 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.101066 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.101236 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.101372 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.101519 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:53.101769 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:53.101782 1213155 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 14:29:53.206593 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:29:53.206617 1213155 main.go:141] libmachine: Detecting the provisioner...
	I0414 14:29:53.206628 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.209588 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.209969 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.209988 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.210187 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.210382 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.210544 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.210717 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.210971 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:53.211192 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:53.211205 1213155 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 14:29:53.315888 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 14:29:53.315980 1213155 main.go:141] libmachine: found compatible host: buildroot
	I0414 14:29:53.315990 1213155 main.go:141] libmachine: Provisioning with buildroot...
	I0414 14:29:53.316001 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:29:53.316277 1213155 buildroot.go:166] provisioning hostname "ha-290859-m02"
	I0414 14:29:53.316306 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:29:53.316451 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.319393 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.319803 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.319837 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.319946 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.320140 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.320321 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.320450 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.320602 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:53.320806 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:53.320818 1213155 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-290859-m02 && echo "ha-290859-m02" | sudo tee /etc/hostname
	I0414 14:29:53.442594 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-290859-m02
	
	I0414 14:29:53.442629 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.445561 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.445918 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.445944 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.446150 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.446351 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.446528 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.446678 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.446833 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:53.447038 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:53.447053 1213155 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-290859-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-290859-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-290859-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:29:53.559946 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
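The guarded script above keeps /etc/hosts idempotent: if no line already ends in the new hostname, it rewrites the Debian-style 127.0.1.1 self-entry in place, or appends one if none exists. Afterwards the guest's hosts file contains, roughly:

127.0.0.1	localhost
127.0.1.1	ha-290859-m02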
	I0414 14:29:53.559988 1213155 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1196368/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1196368/.minikube}
	I0414 14:29:53.560014 1213155 buildroot.go:174] setting up certificates
	I0414 14:29:53.560031 1213155 provision.go:84] configureAuth start
	I0414 14:29:53.560046 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:29:53.560377 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:29:53.562822 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.563207 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.563237 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.563574 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.566107 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.566478 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.566505 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.566628 1213155 provision.go:143] copyHostCerts
	I0414 14:29:53.566676 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:29:53.566716 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem, removing ...
	I0414 14:29:53.566730 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:29:53.566839 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem (1082 bytes)
	I0414 14:29:53.566954 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:29:53.566979 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem, removing ...
	I0414 14:29:53.566987 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:29:53.567026 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem (1123 bytes)
	I0414 14:29:53.567106 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:29:53.567130 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem, removing ...
	I0414 14:29:53.567137 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:29:53.567173 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem (1675 bytes)
	I0414 14:29:53.567293 1213155 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem org=jenkins.ha-290859-m02 san=[127.0.0.1 192.168.39.111 ha-290859-m02 localhost minikube]
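configureAuth issues a server certificate signed by the shared minikube CA whose SANs cover every name a client might dial: loopback, the node IP, the hostname, and the generic names. A self-contained crypto/x509 sketch of issuing such a SAN-bearing cert (a throwaway CA is generated inline to keep the sketch runnable; this is not minikube's provision code, which loads ca.pem/ca-key.pem from disk):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for the minikubeCA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert carrying the SANs seen in the provision.go line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-290859-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-290859-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.111")},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}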
	I0414 14:29:53.976110 1213155 provision.go:177] copyRemoteCerts
	I0414 14:29:53.976184 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:29:53.976219 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.978798 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.979170 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.979202 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.979355 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.979571 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.979771 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.979950 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:29:54.060926 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0414 14:29:54.061020 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0414 14:29:54.083723 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0414 14:29:54.083818 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 14:29:54.106702 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0414 14:29:54.106773 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 14:29:54.128136 1213155 provision.go:87] duration metric: took 568.088664ms to configureAuth
	I0414 14:29:54.128177 1213155 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:29:54.128372 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:54.128400 1213155 main.go:141] libmachine: Checking connection to Docker...
	I0414 14:29:54.128413 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetURL
	I0414 14:29:54.129571 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | using libvirt version 6000000
	I0414 14:29:54.131690 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.132071 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.132095 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.132296 1213155 main.go:141] libmachine: Docker is up and running!
	I0414 14:29:54.132311 1213155 main.go:141] libmachine: Reticulating splines...
	I0414 14:29:54.132318 1213155 client.go:171] duration metric: took 23.368636066s to LocalClient.Create
	I0414 14:29:54.132344 1213155 start.go:167] duration metric: took 23.368708618s to libmachine.API.Create "ha-290859"
	I0414 14:29:54.132356 1213155 start.go:293] postStartSetup for "ha-290859-m02" (driver="kvm2")
	I0414 14:29:54.132370 1213155 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:29:54.132394 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.132652 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:29:54.132681 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:54.134726 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.135119 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.135146 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.135312 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:54.135512 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.135648 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:54.135782 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:29:54.217134 1213155 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:29:54.221237 1213155 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:29:54.221265 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/addons for local assets ...
	I0414 14:29:54.221324 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/files for local assets ...
	I0414 14:29:54.221392 1213155 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> 12036392.pem in /etc/ssl/certs
	I0414 14:29:54.221401 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /etc/ssl/certs/12036392.pem
	I0414 14:29:54.221495 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:29:54.230111 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:29:54.253934 1213155 start.go:296] duration metric: took 121.560617ms for postStartSetup
	I0414 14:29:54.253995 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
	I0414 14:29:54.254683 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:29:54.257374 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.257778 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.257811 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.258118 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:29:54.258332 1213155 start.go:128] duration metric: took 23.513984018s to createHost
	I0414 14:29:54.258362 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:54.260873 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.261257 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.261285 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.261448 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:54.261638 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.261821 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.261984 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:54.262185 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:54.262369 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:54.262379 1213155 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:29:54.367727 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744640994.343893226
	
	I0414 14:29:54.367759 1213155 fix.go:216] guest clock: 1744640994.343893226
	I0414 14:29:54.367766 1213155 fix.go:229] Guest: 2025-04-14 14:29:54.343893226 +0000 UTC Remote: 2025-04-14 14:29:54.258346943 +0000 UTC m=+69.442509295 (delta=85.546283ms)
	I0414 14:29:54.367782 1213155 fix.go:200] guest clock delta is within tolerance: 85.546283ms
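fix.go reads the guest's `date +%s.%N` over SSH and compares it against the host-side timestamp captured around the call; here the skew is 85.5ms, inside tolerance, so no clock resync is needed. A small sketch of the parse-and-compare step (the 2s tolerance is an assumption for illustration, not minikube's configured value):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns the
// guest-minus-host skew, as in the fix.go lines above. `%N` always
// prints nine digits, so the fraction maps directly to nanoseconds.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Unix(sec, nsec).Sub(host), nil
}

func main() {
	host := time.Unix(1744640994, 258346943) // the "Remote" timestamp from the log
	d, _ := clockDelta("1744640994.343893226", host)
	const tolerance = 2 * time.Second // assumed tolerance, for illustration only
	fmt.Printf("delta=%v within=%v\n", d, d < tolerance && d > -tolerance)
}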
	I0414 14:29:54.367788 1213155 start.go:83] releasing machines lock for "ha-290859-m02", held for 23.623550564s
	I0414 14:29:54.367807 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.368115 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:29:54.370975 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.371432 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.371462 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.373758 1213155 out.go:177] * Found network options:
	I0414 14:29:54.375127 1213155 out.go:177]   - NO_PROXY=192.168.39.110
	W0414 14:29:54.376278 1213155 proxy.go:119] fail to check proxy env: Error ip not in block
	I0414 14:29:54.376312 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.376913 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.377127 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.377268 1213155 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:29:54.377316 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	W0414 14:29:54.377370 1213155 proxy.go:119] fail to check proxy env: Error ip not in block
	I0414 14:29:54.377457 1213155 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0414 14:29:54.377481 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:54.380102 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.380374 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.380406 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.380429 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.380578 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:54.380741 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.380859 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.380897 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.380909 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:54.381045 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:29:54.381125 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:54.381305 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.381467 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:54.381614 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	W0414 14:29:54.458225 1213155 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:29:54.458308 1213155 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:29:54.490449 1213155 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 14:29:54.490475 1213155 start.go:495] detecting cgroup driver to use...
	I0414 14:29:54.490555 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0414 14:29:54.524660 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0414 14:29:54.537871 1213155 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:29:54.537936 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:29:54.549801 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:29:54.562203 1213155 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:29:54.666348 1213155 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:29:54.786710 1213155 docker.go:233] disabling docker service ...
	I0414 14:29:54.786789 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:29:54.800092 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:29:54.812105 1213155 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:29:54.936777 1213155 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:29:55.059002 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 14:29:55.072980 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:29:55.089970 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0414 14:29:55.099362 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0414 14:29:55.108681 1213155 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0414 14:29:55.108766 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0414 14:29:55.118203 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:29:55.127402 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0414 14:29:55.136483 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:29:55.145554 1213155 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:29:55.154769 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0414 14:29:55.163700 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0414 14:29:55.172612 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
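Taken together, the sed passes above leave /etc/containerd/config.toml pinning the pause image, running runc v2 under cgroupfs (SystemdCgroup = false), re-enabling unprivileged ports, and pointing CNI at /etc/cni/net.d. Roughly the resulting CRI fragment (approximate shape; exact nesting varies by containerd config version):

[plugins."io.containerd.grpc.v1.cri"]
  enable_unprivileged_ports = true
  restrict_oom_score_adj = false
  sandbox_image = "registry.k8s.io/pause:3.10"
  [plugins."io.containerd.grpc.v1.cri".cni]
    conf_dir = "/etc/cni/net.d"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = false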
	I0414 14:29:55.181597 1213155 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:29:55.189962 1213155 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:29:55.190019 1213155 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:29:55.202112 1213155 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 14:29:55.210883 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:29:55.319480 1213155 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0414 14:29:55.344914 1213155 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0414 14:29:55.345008 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:29:55.349081 1213155 retry.go:31] will retry after 1.00520308s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0414 14:29:56.354657 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:29:56.359600 1213155 start.go:563] Will wait 60s for crictl version
	I0414 14:29:56.359685 1213155 ssh_runner.go:195] Run: which crictl
	I0414 14:29:56.363336 1213155 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:29:56.403201 1213155 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0414 14:29:56.403312 1213155 ssh_runner.go:195] Run: containerd --version
	I0414 14:29:56.430179 1213155 ssh_runner.go:195] Run: containerd --version
	I0414 14:29:56.454598 1213155 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.23 ...
	I0414 14:29:56.455785 1213155 out.go:177]   - env NO_PROXY=192.168.39.110
	I0414 14:29:56.456735 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:29:56.459280 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:56.459661 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:56.459691 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:56.459901 1213155 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 14:29:56.463673 1213155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:29:56.475057 1213155 mustload.go:65] Loading cluster: ha-290859
	I0414 14:29:56.475248 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:56.475557 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:56.475600 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:56.490597 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45247
	I0414 14:29:56.491136 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:56.491690 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:56.491711 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:56.492119 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:56.492309 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:56.493794 1213155 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:29:56.494134 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:56.494173 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:56.509360 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38381
	I0414 14:29:56.509774 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:56.510229 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:56.510256 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:56.510618 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:56.510840 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:56.511031 1213155 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859 for IP: 192.168.39.111
	I0414 14:29:56.511044 1213155 certs.go:194] generating shared ca certs ...
	I0414 14:29:56.511057 1213155 certs.go:226] acquiring lock for ca certs: {Name:mk7215406b4c41badf9eca6bf9f1036fd88f670e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:56.511177 1213155 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key
	I0414 14:29:56.511226 1213155 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key
	I0414 14:29:56.511236 1213155 certs.go:256] generating profile certs ...
	I0414 14:29:56.511347 1213155 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key
	I0414 14:29:56.511373 1213155 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e
	I0414 14:29:56.511386 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110 192.168.39.111 192.168.39.254]
	I0414 14:29:56.589532 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e ...
	I0414 14:29:56.589564 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e: {Name:mk9fb7b2adad4a62e9ebf1f50826b8647aaaa2d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:56.589727 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e ...
	I0414 14:29:56.589740 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e: {Name:mk7ad07038879568d4a23c2fb5c04f12405eb02f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:56.589811 1213155 certs.go:381] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt
	I0414 14:29:56.589948 1213155 certs.go:385] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key
	I0414 14:29:56.590096 1213155 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key
	I0414 14:29:56.590118 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0414 14:29:56.590137 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0414 14:29:56.590151 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0414 14:29:56.590162 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0414 14:29:56.590180 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0414 14:29:56.590198 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0414 14:29:56.590211 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0414 14:29:56.590220 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0414 14:29:56.590271 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem (1338 bytes)
	W0414 14:29:56.590298 1213155 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639_empty.pem, impossibly tiny 0 bytes
	I0414 14:29:56.590308 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 14:29:56.590327 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem (1082 bytes)
	I0414 14:29:56.590346 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:29:56.590368 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem (1675 bytes)
	I0414 14:29:56.590404 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:29:56.590430 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:56.590446 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem -> /usr/share/ca-certificates/1203639.pem
	I0414 14:29:56.590457 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /usr/share/ca-certificates/12036392.pem
	I0414 14:29:56.590494 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:56.593379 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:56.593755 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:56.593777 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:56.593996 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:56.594232 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:56.594405 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:56.594540 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:56.671687 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0414 14:29:56.677338 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0414 14:29:56.689003 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0414 14:29:56.693487 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0414 14:29:56.704430 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0414 14:29:56.708650 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0414 14:29:56.719039 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0414 14:29:56.723166 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0414 14:29:56.734152 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0414 14:29:56.738243 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0414 14:29:56.749081 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0414 14:29:56.753248 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0414 14:29:56.764073 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:29:56.788198 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:29:56.813073 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:29:56.835958 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:29:56.859645 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0414 14:29:56.882879 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 14:29:56.906187 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:29:56.928932 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 14:29:56.952365 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:29:56.974920 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem --> /usr/share/ca-certificates/1203639.pem (1338 bytes)
	I0414 14:29:56.998466 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /usr/share/ca-certificates/12036392.pem (1708 bytes)
	I0414 14:29:57.022704 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0414 14:29:57.038828 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0414 14:29:57.054237 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0414 14:29:57.069513 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0414 14:29:57.085532 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0414 14:29:57.101522 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0414 14:29:57.117372 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0414 14:29:57.132827 1213155 ssh_runner.go:195] Run: openssl version
	I0414 14:29:57.138331 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:29:57.148324 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:57.152469 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:57.152557 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:57.158279 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 14:29:57.169126 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1203639.pem && ln -fs /usr/share/ca-certificates/1203639.pem /etc/ssl/certs/1203639.pem"
	I0414 14:29:57.179995 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1203639.pem
	I0414 14:29:57.184265 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1203639.pem
	I0414 14:29:57.184340 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1203639.pem
	I0414 14:29:57.189810 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1203639.pem /etc/ssl/certs/51391683.0"
	I0414 14:29:57.199987 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12036392.pem && ln -fs /usr/share/ca-certificates/12036392.pem /etc/ssl/certs/12036392.pem"
	I0414 14:29:57.210177 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12036392.pem
	I0414 14:29:57.214740 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/12036392.pem
	I0414 14:29:57.214815 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12036392.pem
	I0414 14:29:57.221853 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12036392.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 14:29:57.232248 1213155 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:29:57.236270 1213155 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 14:29:57.236327 1213155 kubeadm.go:934] updating node {m02 192.168.39.111 8443 v1.32.2 containerd true true} ...
	I0414 14:29:57.236439 1213155 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-290859-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
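Note the drop-in idiom in the unit fragment above: the bare ExecStart= clears the ExecStart inherited from the base kubelet.service, and the following line redefines it with this node's --hostname-override and --node-ip.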
	I0414 14:29:57.236473 1213155 kube-vip.go:115] generating kube-vip config ...
	I0414 14:29:57.236525 1213155 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0414 14:29:57.252239 1213155 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0414 14:29:57.252336 1213155 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.10
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
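The rendered manifest above is deployed as a static pod: written under the kubelet's manifest directory so the kubelet runs it without needing the API server, which is exactly what a bootstrap load balancer requires. kube-vip's leader election (lease plndr-cp-lock with 5s duration, 3s renew deadline, 1s retry) then decides which control-plane node answers on the VIP 192.168.39.254:8443. A minimal sketch of the write step, assuming the conventional /etc/kubernetes/manifests path (helper name illustrative):

package main

import (
	"os"
	"path/filepath"
)

// writeStaticPod drops a manifest where the kubelet's
// --pod-manifest-path (conventionally /etc/kubernetes/manifests)
// will pick it up; the kubelet then manages the pod directly,
// so kube-vip is usable before the API server exists.
func writeStaticPod(manifest []byte) error {
	dir := "/etc/kubernetes/manifests"
	if err := os.MkdirAll(dir, 0o755); err != nil {
		return err
	}
	return os.WriteFile(filepath.Join(dir, "kube-vip.yaml"), manifest, 0o600)
}

func main() {
	_ = writeStaticPod([]byte("apiVersion: v1\nkind: Pod\n# ...rendered config from above...\n"))
}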
	I0414 14:29:57.252412 1213155 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 14:29:57.262218 1213155 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.2': No such file or directory
	
	Initiating transfer...
	I0414 14:29:57.262295 1213155 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.2
	I0414 14:29:57.271580 1213155 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
	I0414 14:29:57.271599 1213155 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubeadm
	I0414 14:29:57.271617 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubectl -> /var/lib/minikube/binaries/v1.32.2/kubectl
	I0414 14:29:57.271622 1213155 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubelet
	I0414 14:29:57.271681 1213155 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl
	I0414 14:29:57.275804 1213155 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubectl': No such file or directory
	I0414 14:29:57.275835 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubectl --> /var/lib/minikube/binaries/v1.32.2/kubectl (57323672 bytes)
	I0414 14:29:58.408400 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:29:58.423781 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubelet -> /var/lib/minikube/binaries/v1.32.2/kubelet
	I0414 14:29:58.423898 1213155 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet
	I0414 14:29:58.428378 1213155 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubelet': No such file or directory
	I0414 14:29:58.428415 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubelet --> /var/lib/minikube/binaries/v1.32.2/kubelet (77406468 bytes)
	I0414 14:29:58.749359 1213155 out.go:201] 
	W0414 14:29:58.750775 1213155 out.go:270] X Exiting due to GUEST_START: failed to start node: adding node: update node: downloading binaries: downloading kubeadm: download failed: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256 Dst:/home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubeadm.download Pwd: Mode:2 Umask:---------- Detectors:[0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0] Decompressors:map[bz2:0xc0004c8690 gz:0xc0004c8698 tar:0xc0004c8610 tar.bz2:0xc0004c8620 tar.gz:0xc0004c8630 tar.xz:0xc0004c8650 tar.zst:0xc0004c8660 tbz2:0xc0004c8620 tgz:0xc0004c8630 txz:0xc0004c8650 tzst:0xc0004c8660 xz:0xc0004c8700 zip:0xc0004c8720 zst:0xc0004c8708] Getters:map[file:0xc00216a250 http:0xc00012c550 https:0xc00012c5a0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: read tcp 10.154.0.3:60586->151.101.193.55:443: read: connection reset by peer
	W0414 14:29:58.750801 1213155 out.go:270] * 
	W0414 14:29:58.751639 1213155 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 14:29:58.753070 1213155 out.go:201] 
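	The GUEST_START exit above is the actual failure, and it points at a transient network fault on the CI host rather than anything in the cluster logs that follow: while provisioning the m02 node, minikube fetches the v1.32.2 kubeadm binary through hashicorp/go-getter (the &{Ctx:... Getters:...} dump is go-getter's Client), with the ?checksum=file:<url>.sha256 query instructing the getter to verify the payload against the published SHA-256. The TCP stream from dl.k8s.io (151.101.193.55, a Fastly edge) was reset mid-read, the getter returned the error, and the node add aborted. A minimal sketch of the same checksum-verified fetch, assuming only that the Client fields mirror the error dump:
	
		package main
	
		import (
			"context"
			"log"
	
			getter "github.com/hashicorp/go-getter"
		)
	
		func main() {
			src := "https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm" +
				"?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256"
			client := &getter.Client{
				Ctx:  context.Background(),
				Src:  src,
				Dst:  "/tmp/kubeadm.download", // destination path is illustrative
				Mode: getter.ClientModeFile,
			}
			// A transient "connection reset by peer" from the CDN surfaces here;
			// minikube wraps it into the GUEST_START error shown above.
			if err := client.Get(); err != nil {
				log.Fatal(err)
			}
		}
	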
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	24e6d7cfe7ea4       8c811b4aec35f       12 minutes ago      Running             busybox                   0                   78438e8022143       busybox-58667487b6-t6bgg
	731a9f2fe8645       c69fa2e9cbf5f       13 minutes ago      Running             coredns                   0                   e56d2e4c87eea       coredns-668d6bf9bc-qnl6q
	0ec0a3a234c7c       c69fa2e9cbf5f       13 minutes ago      Running             coredns                   0                   2818c413e6e32       coredns-668d6bf9bc-wbn4p
	922f97d06563e       6e38f40d628db       13 minutes ago      Running             storage-provisioner       0                   4de376d34ee7f       storage-provisioner
	2df8ccb8d6ed9       df3849d954c98       13 minutes ago      Running             kindnet-cni               0                   08244cfc780bd       kindnet-hm99t
	e22a81661302f       f1332858868e1       13 minutes ago      Running             kube-proxy                0                   f20a0bcfbd507       kube-proxy-cg945
	9914f8879fc43       6ff023a402a69       13 minutes ago      Running             kube-vip                  0                   7b4e857fc4a72       kube-vip-ha-290859
	8263b35014337       b6a454c5a800d       13 minutes ago      Running             kube-controller-manager   0                   96ffccfabb2f0       kube-controller-manager-ha-290859
	3607093f95b04       85b7a174738ba       13 minutes ago      Running             kube-apiserver            0                   7d06c53c8318a       kube-apiserver-ha-290859
	b9d0c94204534       a9e7e6b294baf       13 minutes ago      Running             etcd                      0                   07c98c2ded11c       etcd-ha-290859
	341626ffff967       d8e673e7c9983       13 minutes ago      Running             kube-scheduler            0                   d86edf81d4f34       kube-scheduler-ha-290859
	
	
	==> containerd <==
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.168944603Z" level=info msg="StartContainer for \"0ec0a3a234c7c9ab89ca83a237362a229e9c5f0e94fdbf641b886cf994e1cd2f\""
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.181036869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qnl6q,Uid:a590080d-c4b1-4697-9849-ae6130e483a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"e56d2e4c87eea2d527e5c301e33c596e4ec4533b17e49248e3c35eeb66f90f11\""
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.186359489Z" level=info msg="CreateContainer within sandbox \"e56d2e4c87eea2d527e5c301e33c596e4ec4533b17e49248e3c35eeb66f90f11\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.209760426Z" level=info msg="CreateContainer within sandbox \"e56d2e4c87eea2d527e5c301e33c596e4ec4533b17e49248e3c35eeb66f90f11\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0\""
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.212826022Z" level=info msg="StartContainer for \"922f97d06563e10c12ce83edd45e4f1aa0b78449dcdb50b413a7f4fc80cc346b\" returns successfully"
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.215681811Z" level=info msg="StartContainer for \"731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0\""
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.285830032Z" level=info msg="StartContainer for \"0ec0a3a234c7c9ab89ca83a237362a229e9c5f0e94fdbf641b886cf994e1cd2f\" returns successfully"
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.294639585Z" level=info msg="StartContainer for \"731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0\" returns successfully"
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.131928214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-58667487b6-t6bgg,Uid:bd39f57c-bcb5-4d77-b171-6d4d2f237e54,Namespace:default,Attempt:0,}"
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.218617705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.218691310Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.218706805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.218958691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.281907696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-58667487b6-t6bgg,Uid:bd39f57c-bcb5-4d77-b171-6d4d2f237e54,Namespace:default,Attempt:0,} returns sandbox id \"78438e8022143055bed5e2d8a26db130ead88208a68bd14ca25618be3edf24e2\""
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.284050999Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\""
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.401970091Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.404464641Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28: active requests=0, bytes read=727667"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.406415797Z" level=info msg="ImageCreate event name:\"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.409920833Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.411266903Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28\" with image id \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\", repo tag \"gcr.io/k8s-minikube/busybox:1.28\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\", size \"725911\" in 2.127171694s"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.411378057Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\" returns image reference \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\""
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.414728181Z" level=info msg="CreateContainer within sandbox \"78438e8022143055bed5e2d8a26db130ead88208a68bd14ca25618be3edf24e2\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.437197602Z" level=info msg="CreateContainer within sandbox \"78438e8022143055bed5e2d8a26db130ead88208a68bd14ca25618be3edf24e2\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"24e6d7cfe7ea4490a4e08a40f32b9cf717c4d83060631102c580d6adf2fc47f5\""
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.439640223Z" level=info msg="StartContainer for \"24e6d7cfe7ea4490a4e08a40f32b9cf717c4d83060631102c580d6adf2fc47f5\""
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.489937462Z" level=info msg="StartContainer for \"24e6d7cfe7ea4490a4e08a40f32b9cf717c4d83060631102c580d6adf2fc47f5\" returns successfully"
	
	
	==> coredns [0ec0a3a234c7c9ab89ca83a237362a229e9c5f0e94fdbf641b886cf994e1cd2f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:46089 - 56153 "HINFO IN 6072608555509463616.6529762715821029691. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009374887s
	[INFO] 10.244.0.4:35907 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000221161s
	[INFO] 10.244.0.4:36782 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005796917s
	[INFO] 10.244.0.4:41522 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000189547s
	[INFO] 10.244.0.4:42146 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118814s
	[INFO] 10.244.0.4:60607 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123758s
	[INFO] 10.244.0.4:43711 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000363945s
	[INFO] 10.244.0.4:55165 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147511s
	[INFO] 10.244.0.4:37988 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063814s
	[INFO] 10.244.0.4:34715 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110518s
	
	
	==> coredns [731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:50026 - 40228 "HINFO IN 6089878548460793106.7503956428927620962. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010088983s
	[INFO] 10.244.0.4:56129 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00054069s
	[INFO] 10.244.0.4:53926 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.015577927s
	[INFO] 10.244.0.4:39454 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 1.017801671s
	[INFO] 10.244.0.4:52928 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.006480432s
	[INFO] 10.244.0.4:37155 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000144828s
	[INFO] 10.244.0.4:60063 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003567762s
	[INFO] 10.244.0.4:60207 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153406s
	[INFO] 10.244.0.4:60174 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117303s
	[INFO] 10.244.0.4:60031 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124845s
	[INFO] 10.244.0.4:43114 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000177401s
	[INFO] 10.244.0.4:59108 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000291115s
	
	
	==> describe nodes <==
	Name:               ha-290859
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-290859
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ed8f1f01b35eff2786f40199152a1775806f2de2
	                    minikube.k8s.io/name=ha-290859
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_14T14_29_26_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Apr 2025 14:29:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-290859
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Apr 2025 14:42:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Apr 2025 14:42:20 +0000   Mon, 14 Apr 2025 14:29:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Apr 2025 14:42:20 +0000   Mon, 14 Apr 2025 14:29:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Apr 2025 14:42:20 +0000   Mon, 14 Apr 2025 14:29:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Apr 2025 14:42:20 +0000   Mon, 14 Apr 2025 14:29:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.110
	  Hostname:    ha-290859
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0538f5775f954b3bbf6bc94e8eb6c49a
	  System UUID:                0538f577-5f95-4b3b-bf6b-c94e8eb6c49a
	  Boot ID:                    357ae105-a7f9-47b1-bf31-1c1aadedfe92
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-t6bgg             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-668d6bf9bc-qnl6q             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-668d6bf9bc-wbn4p             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-290859                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-hm99t                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-290859             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-290859    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-cg945                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-290859             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-290859                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node ha-290859 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node ha-290859 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node ha-290859 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m   node-controller  Node ha-290859 event: Registered Node ha-290859 in Controller
	  Normal  NodeReady                13m   kubelet          Node ha-290859 status is now: NodeReady
	
	
	Name:               ha-290859-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-290859-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ed8f1f01b35eff2786f40199152a1775806f2de2
	                    minikube.k8s.io/name=ha-290859
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_04_14T14_42_30_0700
	                    minikube.k8s.io/version=v1.35.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Apr 2025 14:42:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-290859-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Apr 2025 14:43:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Apr 2025 14:43:00 +0000   Mon, 14 Apr 2025 14:42:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Apr 2025 14:43:00 +0000   Mon, 14 Apr 2025 14:42:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Apr 2025 14:43:00 +0000   Mon, 14 Apr 2025 14:42:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Apr 2025 14:43:00 +0000   Mon, 14 Apr 2025 14:42:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.112
	  Hostname:    ha-290859-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 96e9da9bd9e1490583702338b88b0c23
	  System UUID:                96e9da9b-d9e1-4905-8370-2338b88b0c23
	  Boot ID:                    b2600615-03c7-4984-8138-73f9baedc04e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-8bg2x    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kindnet-4jz25               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      32s
	  kube-system                 kube-proxy-sp56w            0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 26s                kube-proxy       
	  Normal  NodeHasSufficientMemory  32s (x2 over 32s)  kubelet          Node ha-290859-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s (x2 over 32s)  kubelet          Node ha-290859-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s (x2 over 32s)  kubelet          Node ha-290859-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  32s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           28s                node-controller  Node ha-290859-m03 event: Registered Node ha-290859-m03 in Controller
	  Normal  NodeReady                12s                kubelet          Node ha-290859-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051284] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038065] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.815736] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.968563] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.543371] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr14 14:29] systemd-fstab-generator[505]: Ignoring "noauto" option for root device
	[  +0.058894] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059786] systemd-fstab-generator[518]: Ignoring "noauto" option for root device
	[  +0.183634] systemd-fstab-generator[532]: Ignoring "noauto" option for root device
	[  +0.109211] systemd-fstab-generator[544]: Ignoring "noauto" option for root device
	[  +0.261328] systemd-fstab-generator[574]: Ignoring "noauto" option for root device
	[  +4.868852] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.061817] kauditd_printk_skb: 158 callbacks suppressed
	[  +0.541337] systemd-fstab-generator[688]: Ignoring "noauto" option for root device
	[  +4.433977] systemd-fstab-generator[826]: Ignoring "noauto" option for root device
	[  +0.054755] kauditd_printk_skb: 46 callbacks suppressed
	[  +7.040196] systemd-fstab-generator[1293]: Ignoring "noauto" option for root device
	[  +0.092655] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.133260] kauditd_printk_skb: 36 callbacks suppressed
	[ +14.332004] kauditd_printk_skb: 23 callbacks suppressed
	[Apr14 14:30] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [b9d0c942045346e617420beacf1ee53ebaa73b72295bfad233845fe524f8b15c] <==
	{"level":"info","ts":"2025-04-14T14:29:20.939433Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T14:29:20.940639Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a3dbfa6decfc8853","local-member-id":"fbb007bab925a598","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:29:20.940850Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:29:20.940910Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:29:20.941291Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T14:29:20.941327Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T14:29:20.942134Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-14T14:29:20.942264Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.110:2379"}
	{"level":"info","ts":"2025-04-14T14:29:20.943625Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-14T14:29:20.943655Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2025-04-14T14:29:27.104552Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.197172ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/node-controller\" limit:1 ","response":"range_response_count:1 size:195"}
	{"level":"info","ts":"2025-04-14T14:29:27.104712Z","caller":"traceutil/trace.go:171","msg":"trace[2014118741] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/node-controller; range_end:; response_count:1; response_revision:283; }","duration":"161.489617ms","start":"2025-04-14T14:29:26.943197Z","end":"2025-04-14T14:29:27.104687Z","steps":["trace[2014118741] 'range keys from in-memory index tree'  (duration: 161.141805ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:29:27.105569Z","caller":"traceutil/trace.go:171","msg":"trace[1003808847] transaction","detail":"{read_only:false; response_revision:284; number_of_response:1; }","duration":"157.128151ms","start":"2025-04-14T14:29:26.948431Z","end":"2025-04-14T14:29:27.105559Z","steps":["trace[1003808847] 'process raft request'  (duration: 84.378612ms)","trace[1003808847] 'compare'  (duration: 71.52798ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-14T14:29:27.104865Z","caller":"traceutil/trace.go:171","msg":"trace[43329066] linearizableReadLoop","detail":"{readStateIndex:297; appliedIndex:296; }","duration":"119.436827ms","start":"2025-04-14T14:29:26.985404Z","end":"2025-04-14T14:29:27.104841Z","steps":["trace[43329066] 'read index received'  (duration: 47.335931ms)","trace[43329066] 'applied index is now lower than readState.Index'  (duration: 72.100547ms)"],"step_count":2}
	{"level":"warn","ts":"2025-04-14T14:29:27.105882Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.482108ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-290859\" limit:1 ","response":"range_response_count:1 size:4024"}
	{"level":"info","ts":"2025-04-14T14:29:27.105907Z","caller":"traceutil/trace.go:171","msg":"trace[1848025885] range","detail":"{range_begin:/registry/minions/ha-290859; range_end:; response_count:1; response_revision:284; }","duration":"120.538719ms","start":"2025-04-14T14:29:26.985360Z","end":"2025-04-14T14:29:27.105899Z","steps":["trace[1848025885] 'agreement among raft nodes before linearized reading'  (duration: 120.384333ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:30:04.979205Z","caller":"traceutil/trace.go:171","msg":"trace[85484590] transaction","detail":"{read_only:false; response_revision:496; number_of_response:1; }","duration":"156.247744ms","start":"2025-04-14T14:30:04.822935Z","end":"2025-04-14T14:30:04.979183Z","steps":["trace[85484590] 'process raft request'  (duration: 156.102613ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:39:20.967676Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":955}
	{"level":"info","ts":"2025-04-14T14:39:20.980951Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":955,"took":"12.971168ms","hash":3281203929,"current-db-size-bytes":2400256,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2400256,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2025-04-14T14:39:20.980998Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3281203929,"revision":955,"compact-revision":-1}
	{"level":"info","ts":"2025-04-14T14:42:12.425594Z","caller":"traceutil/trace.go:171","msg":"trace[593749251] linearizableReadLoop","detail":"{readStateIndex:1974; appliedIndex:1973; }","duration":"103.549581ms","start":"2025-04-14T14:42:12.322004Z","end":"2025-04-14T14:42:12.425554Z","steps":["trace[593749251] 'read index received'  (duration: 102.720139ms)","trace[593749251] 'applied index is now lower than readState.Index'  (duration: 828.805µs)"],"step_count":2}
	{"level":"warn","ts":"2025-04-14T14:42:12.426144Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.759593ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2025-04-14T14:42:12.426196Z","caller":"traceutil/trace.go:171","msg":"trace[257637869] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; response_count:0; response_revision:1805; }","duration":"104.23976ms","start":"2025-04-14T14:42:12.321948Z","end":"2025-04-14T14:42:12.426188Z","steps":["trace[257637869] 'agreement among raft nodes before linearized reading'  (duration: 103.769974ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:42:12.425685Z","caller":"traceutil/trace.go:171","msg":"trace[874985590] transaction","detail":"{read_only:false; response_revision:1805; number_of_response:1; }","duration":"128.996586ms","start":"2025-04-14T14:42:12.296675Z","end":"2025-04-14T14:42:12.425672Z","steps":["trace[874985590] 'process raft request'  (duration: 128.079961ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T14:42:29.811595Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.362023ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11932452365827166964 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:25989634b465d2f3>","response":"size:42"}
	
	
	==> kernel <==
	 14:43:01 up 14 min,  0 users,  load average: 0.14, 0.19, 0.11
	Linux ha-290859 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2df8ccb8d6ed928a95e69ecd1be2105fc737c699aa26805820a0af0eca5bb50d] <==
	I0414 14:41:34.500339       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:41:34.500416       1 main.go:301] handling current node
	I0414 14:41:44.500407       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:41:44.500557       1 main.go:301] handling current node
	I0414 14:41:54.509039       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:41:54.509064       1 main.go:301] handling current node
	I0414 14:42:04.509599       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:42:04.509640       1 main.go:301] handling current node
	I0414 14:42:14.505184       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:42:14.505543       1 main.go:301] handling current node
	I0414 14:42:24.502960       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:42:24.503004       1 main.go:301] handling current node
	I0414 14:42:34.500754       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:42:34.501033       1 main.go:301] handling current node
	I0414 14:42:34.501166       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:42:34.501231       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:42:34.501702       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.112 Flags: [] Table: 0 Realm: 0} 
	I0414 14:42:44.500437       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:42:44.500523       1 main.go:301] handling current node
	I0414 14:42:44.500540       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:42:44.500545       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:42:54.501089       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:42:54.501145       1 main.go:301] handling current node
	I0414 14:42:54.501166       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:42:54.501175       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
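	The kindnet entries above show its node reconciler: it re-lists nodes roughly every ten seconds (per the timestamps), and once ha-290859-m03 registers with PodCIDR 10.244.1.0/24 it programs a route to that CIDR via the node's InternalIP (192.168.39.112). The logged Route struct matches github.com/vishvananda/netlink; a sketch of the equivalent call follows, with the inference that kindnet uses that library drawn only from the log format (the snippet needs root on Linux):
	
		package main
	
		import (
			"log"
			"net"
	
			"github.com/vishvananda/netlink"
		)
	
		func main() {
			// ha-290859-m03's pod CIDR, reachable via its node IP.
			_, dst, err := net.ParseCIDR("10.244.1.0/24")
			if err != nil {
				log.Fatal(err)
			}
			route := &netlink.Route{
				Dst: dst,
				Gw:  net.ParseIP("192.168.39.112"),
			}
			// RouteReplace is idempotent; kindnet's own add/update call may differ.
			if err := netlink.RouteReplace(route); err != nil {
				log.Fatal(err)
			}
		}
	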
	
	
	==> kube-apiserver [3607093f95b0430c4841d7be9ed19d0163ff2e9ee2889a44f89bd1ca07bf42d3] <==
	I0414 14:29:22.362271       1 autoregister_controller.go:144] Starting autoregister controller
	I0414 14:29:22.362276       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0414 14:29:22.362280       1 cache.go:39] Caches are synced for autoregister controller
	I0414 14:29:22.378719       1 controller.go:615] quota admission added evaluator for: namespaces
	I0414 14:29:22.457815       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0414 14:29:23.164003       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0414 14:29:23.168635       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0414 14:29:23.168816       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0414 14:29:23.763560       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0414 14:29:23.812117       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0414 14:29:23.884276       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0414 14:29:23.896601       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.110]
	I0414 14:29:23.897534       1 controller.go:615] quota admission added evaluator for: endpoints
	I0414 14:29:23.902387       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0414 14:29:24.193931       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0414 14:29:25.780107       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0414 14:29:25.808820       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0414 14:29:25.816856       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0414 14:29:29.653221       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0414 14:29:29.756960       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0414 14:41:55.019097       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52466: use of closed network connection
	E0414 14:41:55.440782       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52532: use of closed network connection
	E0414 14:41:55.859929       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52600: use of closed network connection
	E0414 14:41:58.277207       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52686: use of closed network connection
	E0414 14:41:58.438151       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52698: use of closed network connection
	
	
	==> kube-controller-manager [8263b35014337f6119ba3a0d6487090fd5b1b3b8a002a99623620e847d186847] <==
	I0414 14:30:26.371478       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
	I0414 14:37:12.908997       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
	I0414 14:42:20.033463       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
	I0414 14:42:29.935163       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-290859-m03\" does not exist"
	I0414 14:42:29.948852       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-290859-m03" podCIDRs=["10.244.1.0/24"]
	I0414 14:42:29.949152       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:29.949831       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:29.958386       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="234.248µs"
	I0414 14:42:29.963750       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:29.969981       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="39.002µs"
	I0414 14:42:30.275380       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:30.614411       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:33.964410       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-290859-m03"
	I0414 14:42:34.046665       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:39.961881       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:49.191468       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-290859-m03"
	I0414 14:42:49.192361       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:49.201252       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:49.216690       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="71.679µs"
	I0414 14:42:49.217122       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="45.948µs"
	I0414 14:42:49.230018       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="69.053µs"
	I0414 14:42:52.664944       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="13.387962ms"
	I0414 14:42:52.665652       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="82.546µs"
	I0414 14:42:53.979890       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:43:00.010906       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	
	
	==> kube-proxy [e22a81661302ff340c9846a7a06a13d955ab98cfe8e7088e0c805fb4f3eee8a2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0414 14:29:30.555771       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0414 14:29:30.580550       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.110"]
	E0414 14:29:30.580640       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0414 14:29:30.617235       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0414 14:29:30.617293       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0414 14:29:30.617328       1 server_linux.go:170] "Using iptables Proxier"
	I0414 14:29:30.620046       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0414 14:29:30.620989       1 server.go:497] "Version info" version="v1.32.2"
	I0414 14:29:30.621018       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 14:29:30.625365       1 config.go:329] "Starting node config controller"
	I0414 14:29:30.625863       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0414 14:29:30.628597       1 config.go:199] "Starting service config controller"
	I0414 14:29:30.628644       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0414 14:29:30.628665       1 config.go:105] "Starting endpoint slice config controller"
	I0414 14:29:30.628683       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0414 14:29:30.726314       1 shared_informer.go:320] Caches are synced for node config
	I0414 14:29:30.729639       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0414 14:29:30.729680       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [341626ffff967b14e3bfaa050905eba2b82a07223c0356ee50b5deeef6d9898b] <==
	E0414 14:29:22.288686       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:22.287191       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:22.288704       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:22.286394       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0414 14:29:22.288719       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0414 14:29:22.285771       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.108289       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0414 14:29:23.108351       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.153824       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:23.153954       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.203744       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0414 14:29:23.203977       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0414 14:29:23.367236       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0414 14:29:23.367550       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.396026       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0414 14:29:23.396243       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.401643       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:23.401820       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.425454       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0414 14:29:23.425684       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.433181       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:23.433222       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.457688       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0414 14:29:23.457949       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0414 14:29:25.662221       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 14 14:38:25 ha-290859 kubelet[1300]: E0414 14:38:25.691874    1300 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:38:25 ha-290859 kubelet[1300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:38:25 ha-290859 kubelet[1300]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:38:25 ha-290859 kubelet[1300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:38:25 ha-290859 kubelet[1300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 14:39:25 ha-290859 kubelet[1300]: E0414 14:39:25.692811    1300 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:39:25 ha-290859 kubelet[1300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:39:25 ha-290859 kubelet[1300]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:39:25 ha-290859 kubelet[1300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:39:25 ha-290859 kubelet[1300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 14:40:25 ha-290859 kubelet[1300]: E0414 14:40:25.693003    1300 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:40:25 ha-290859 kubelet[1300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:40:25 ha-290859 kubelet[1300]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:40:25 ha-290859 kubelet[1300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:40:25 ha-290859 kubelet[1300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 14:41:25 ha-290859 kubelet[1300]: E0414 14:41:25.692589    1300 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:41:25 ha-290859 kubelet[1300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:41:25 ha-290859 kubelet[1300]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:41:25 ha-290859 kubelet[1300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:41:25 ha-290859 kubelet[1300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 14:42:25 ha-290859 kubelet[1300]: E0414 14:42:25.692394    1300 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:42:25 ha-290859 kubelet[1300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:42:25 ha-290859 kubelet[1300]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:42:25 ha-290859 kubelet[1300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:42:25 ha-290859 kubelet[1300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
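The kubelet entries closing the log above are periodic noise rather than part of this failure: kubelet's iptables monitor re-creates a KUBE-KUBELET-CANARY chain once a minute to detect rule flushes, and the IPv6 attempt exits with status 3 because the Buildroot guest kernel ships no ip6tables nat table. The scheduler's earlier "forbidden" warnings are similar startup-ordering noise; they end once the system:kube-scheduler RBAC bindings reconcile, which the final "Caches are synced" line confirms. A minimal Go sketch of such a canary probe, assuming iptables/ip6tables on PATH (illustrative only, not kubelet's actual code):

package main

import (
	"fmt"
	"os/exec"
)

// probeCanary tries to create the canary chain in the nat table; a failure
// (here: exit status 3 from ip6tables) means the table is unusable.
func probeCanary(bin string) error {
	out, err := exec.Command(bin, "-w", "-t", "nat", "-N", "KUBE-KUBELET-CANARY").CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s: %v: %s", bin, err, out)
	}
	return nil
}

func main() {
	for _, bin := range []string{"iptables", "ip6tables"} {
		if err := probeCanary(bin); err != nil {
			fmt.Println(err) // on this VM only the ip6tables variant fails
		}
	}
}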
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-290859 -n ha-290859
helpers_test.go:261: (dbg) Run:  kubectl --context ha-290859 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-58667487b6-q9jvx
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-290859 describe pod busybox-58667487b6-q9jvx
helpers_test.go:282: (dbg) kubectl --context ha-290859 describe pod busybox-58667487b6-q9jvx:

-- stdout --
	Name:             busybox-58667487b6-q9jvx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=58667487b6
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-58667487b6
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fklg7 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-fklg7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  2m36s (x3 over 13m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  24s (x2 over 33s)    default-scheduler  0/2 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  3s (x2 over 13s)     default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (3.61s)
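The Pending busybox-58667487b6-q9jvx above is a scheduling problem, not a runtime one: the busybox deployment spreads replicas with pod anti-affinity, so no node may host two of them, and with m02 still carrying the not-ready taint only one node could accept pods. A sketch of the kind of anti-affinity term in play, built with the client-go types (the required/matchLabels shape is an assumption; the actual manifest ships with minikube's test data):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hard anti-affinity: never co-schedule two app=busybox pods on the
	// same hostname, so a replica stays Pending while fewer nodes than
	// replicas are schedulable.
	affinity := &corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"app": "busybox"},
				},
				TopologyKey: "kubernetes.io/hostname",
			}},
		},
	}
	fmt.Printf("%+v\n", affinity.PodAntiAffinity)
}

"No preemption victims found" follows directly: evicting a sibling replica would leave the same anti-affinity conflict behind.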

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (2.35s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:415: expected profile "ha-290859" in json of 'profile list' to have "Degraded" status but have "OK" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-290859\",\"Status\":\"OK\",\"Config\":{\"Name\":\"ha-290859\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.32.2\",\"ClusterName\":\"ha-290859\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"containerd\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.110\",\"Port\":8443,\"KubernetesVersion\":\"v1.32.2\",\"ContainerRuntime\":\"containerd\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.111\",\"Port\":8443,\"KubernetesVersion\":\"v1.32.2\",\"ContainerRuntime\":\"containerd\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.112\",\"Port\":0,\"KubernetesVersion\":\"v1.32.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
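The assertion reduces to running "out/minikube-linux-amd64 profile list --output json", decoding the blob above, and requiring the ha-290859 profile's Status to read "Degraded" while a control-plane node is down; it still read "OK" when sampled. A trimmed Go sketch of that check (struct fields follow the JSON shown; this is not the test's exact code):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// profileList mirrors only the fields of the profile-list JSON that
// matter for the status check.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		log.Fatal(err)
	}
	for _, p := range pl.Valid {
		if p.Name == "ha-290859" && p.Status != "Degraded" {
			fmt.Printf("expected %q to have Degraded status, got %q\n", p.Name, p.Status)
		}
	}
}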
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-290859 -n ha-290859
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-290859 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-290859 logs -n 25: (1.148949598s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x -- nslookup |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx -- nslookup |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg -- nslookup |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x             |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx             |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg             |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg -- sh       |           |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1            |           |         |         |                     |                     |
	| node    | add -p ha-290859 -v=7                | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:42 UTC | 14 Apr 25 14:42 UTC |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-290859 node stop m02 -v=7         | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:42 UTC | 14 Apr 25 14:42 UTC |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 14:28:44
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 14:28:44.853283 1213155 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:28:44.853383 1213155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:28:44.853391 1213155 out.go:358] Setting ErrFile to fd 2...
	I0414 14:28:44.853395 1213155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:28:44.853589 1213155 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
	I0414 14:28:44.854173 1213155 out.go:352] Setting JSON to false
	I0414 14:28:44.855127 1213155 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":22268,"bootTime":1744618657,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 14:28:44.855241 1213155 start.go:139] virtualization: kvm guest
	I0414 14:28:44.857434 1213155 out.go:177] * [ha-290859] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 14:28:44.858763 1213155 out.go:177]   - MINIKUBE_LOCATION=20512
	I0414 14:28:44.858802 1213155 notify.go:220] Checking for updates...
	I0414 14:28:44.861113 1213155 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 14:28:44.862568 1213155 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:28:44.864291 1213155 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:28:44.865558 1213155 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 14:28:44.866690 1213155 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 14:28:44.867994 1213155 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 14:28:44.903880 1213155 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 14:28:44.904972 1213155 start.go:297] selected driver: kvm2
	I0414 14:28:44.904990 1213155 start.go:901] validating driver "kvm2" against <nil>
	I0414 14:28:44.905002 1213155 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 14:28:44.905693 1213155 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:28:44.905760 1213155 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20512-1196368/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 14:28:44.921165 1213155 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 14:28:44.921211 1213155 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 14:28:44.921449 1213155 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 14:28:44.921483 1213155 cni.go:84] Creating CNI manager for ""
	I0414 14:28:44.921521 1213155 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0414 14:28:44.921528 1213155 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0414 14:28:44.921581 1213155 start.go:340] cluster config:
	{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:28:44.921681 1213155 iso.go:125] acquiring lock: {Name:mkbf783c803effe6c4b8297ac6b84dcca9e29413 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:28:44.923479 1213155 out.go:177] * Starting "ha-290859" primary control-plane node in "ha-290859" cluster
	I0414 14:28:44.924489 1213155 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:28:44.924534 1213155 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
	I0414 14:28:44.924545 1213155 cache.go:56] Caching tarball of preloaded images
	I0414 14:28:44.924630 1213155 preload.go:172] Found /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0414 14:28:44.924642 1213155 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0414 14:28:44.925004 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:28:44.925036 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json: {Name:mk9cf46898e9311ef305249e5d7a46d116958366 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:28:44.925215 1213155 start.go:360] acquireMachinesLock for ha-290859: {Name:mk496006d22a0565bb9e0d565e1b3cb0cf0971cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 14:28:44.925249 1213155 start.go:364] duration metric: took 19.936µs to acquireMachinesLock for "ha-290859"
	I0414 14:28:44.925270 1213155 start.go:93] Provisioning new machine with config: &{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0414 14:28:44.925333 1213155 start.go:125] createHost starting for "" (driver="kvm2")
	I0414 14:28:44.926873 1213155 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0414 14:28:44.927025 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:28:44.927081 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:28:44.941913 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35769
	I0414 14:28:44.942352 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:28:44.942833 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:28:44.942851 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:28:44.943193 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:28:44.943375 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:28:44.943526 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:28:44.943664 1213155 start.go:159] libmachine.API.Create for "ha-290859" (driver="kvm2")
	I0414 14:28:44.943687 1213155 client.go:168] LocalClient.Create starting
	I0414 14:28:44.943713 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem
	I0414 14:28:44.943749 1213155 main.go:141] libmachine: Decoding PEM data...
	I0414 14:28:44.943766 1213155 main.go:141] libmachine: Parsing certificate...
	I0414 14:28:44.943825 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem
	I0414 14:28:44.943844 1213155 main.go:141] libmachine: Decoding PEM data...
	I0414 14:28:44.943857 1213155 main.go:141] libmachine: Parsing certificate...
	I0414 14:28:44.943880 1213155 main.go:141] libmachine: Running pre-create checks...
	I0414 14:28:44.943888 1213155 main.go:141] libmachine: (ha-290859) Calling .PreCreateCheck
	I0414 14:28:44.944202 1213155 main.go:141] libmachine: (ha-290859) Calling .GetConfigRaw
	I0414 14:28:44.944583 1213155 main.go:141] libmachine: Creating machine...
	I0414 14:28:44.944596 1213155 main.go:141] libmachine: (ha-290859) Calling .Create
	I0414 14:28:44.944741 1213155 main.go:141] libmachine: (ha-290859) creating KVM machine...
	I0414 14:28:44.944764 1213155 main.go:141] libmachine: (ha-290859) creating network...
	I0414 14:28:44.945897 1213155 main.go:141] libmachine: (ha-290859) DBG | found existing default KVM network
	I0414 14:28:44.946500 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:44.946375 1213178 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001236b0}
	I0414 14:28:44.946525 1213155 main.go:141] libmachine: (ha-290859) DBG | created network xml: 
	I0414 14:28:44.946536 1213155 main.go:141] libmachine: (ha-290859) DBG | <network>
	I0414 14:28:44.946547 1213155 main.go:141] libmachine: (ha-290859) DBG |   <name>mk-ha-290859</name>
	I0414 14:28:44.946556 1213155 main.go:141] libmachine: (ha-290859) DBG |   <dns enable='no'/>
	I0414 14:28:44.946567 1213155 main.go:141] libmachine: (ha-290859) DBG |   
	I0414 14:28:44.946578 1213155 main.go:141] libmachine: (ha-290859) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0414 14:28:44.946589 1213155 main.go:141] libmachine: (ha-290859) DBG |     <dhcp>
	I0414 14:28:44.946597 1213155 main.go:141] libmachine: (ha-290859) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0414 14:28:44.946611 1213155 main.go:141] libmachine: (ha-290859) DBG |     </dhcp>
	I0414 14:28:44.946635 1213155 main.go:141] libmachine: (ha-290859) DBG |   </ip>
	I0414 14:28:44.946659 1213155 main.go:141] libmachine: (ha-290859) DBG |   
	I0414 14:28:44.946681 1213155 main.go:141] libmachine: (ha-290859) DBG | </network>
	I0414 14:28:44.946692 1213155 main.go:141] libmachine: (ha-290859) DBG | 
	I0414 14:28:44.951588 1213155 main.go:141] libmachine: (ha-290859) DBG | trying to create private KVM network mk-ha-290859 192.168.39.0/24...
	I0414 14:28:45.019463 1213155 main.go:141] libmachine: (ha-290859) DBG | private KVM network mk-ha-290859 192.168.39.0/24 created
	I0414 14:28:45.019524 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.019424 1213178 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:28:45.019537 1213155 main.go:141] libmachine: (ha-290859) setting up store path in /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859 ...
	I0414 14:28:45.019577 1213155 main.go:141] libmachine: (ha-290859) building disk image from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 14:28:45.019612 1213155 main.go:141] libmachine: (ha-290859) Downloading /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 14:28:45.329551 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.329430 1213178 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa...
	I0414 14:28:45.651739 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.651571 1213178 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/ha-290859.rawdisk...
	I0414 14:28:45.651774 1213155 main.go:141] libmachine: (ha-290859) DBG | Writing magic tar header
	I0414 14:28:45.651813 1213155 main.go:141] libmachine: (ha-290859) DBG | Writing SSH key tar header
	I0414 14:28:45.651828 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.651709 1213178 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859 ...
	I0414 14:28:45.651838 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859
	I0414 14:28:45.651849 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines
	I0414 14:28:45.651870 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:28:45.651877 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368
	I0414 14:28:45.651888 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859 (perms=drwx------)
	I0414 14:28:45.651901 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines (perms=drwxr-xr-x)
	I0414 14:28:45.651912 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube (perms=drwxr-xr-x)
	I0414 14:28:45.651969 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 14:28:45.651997 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins
	I0414 14:28:45.652007 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368 (perms=drwxrwxr-x)
	I0414 14:28:45.652022 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 14:28:45.652031 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 14:28:45.652040 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home
	I0414 14:28:45.652050 1213155 main.go:141] libmachine: (ha-290859) DBG | skipping /home - not owner
	I0414 14:28:45.652117 1213155 main.go:141] libmachine: (ha-290859) creating domain...
	I0414 14:28:45.653155 1213155 main.go:141] libmachine: (ha-290859) define libvirt domain using xml: 
	I0414 14:28:45.653173 1213155 main.go:141] libmachine: (ha-290859) <domain type='kvm'>
	I0414 14:28:45.653182 1213155 main.go:141] libmachine: (ha-290859)   <name>ha-290859</name>
	I0414 14:28:45.653197 1213155 main.go:141] libmachine: (ha-290859)   <memory unit='MiB'>2200</memory>
	I0414 14:28:45.653206 1213155 main.go:141] libmachine: (ha-290859)   <vcpu>2</vcpu>
	I0414 14:28:45.653212 1213155 main.go:141] libmachine: (ha-290859)   <features>
	I0414 14:28:45.653231 1213155 main.go:141] libmachine: (ha-290859)     <acpi/>
	I0414 14:28:45.653240 1213155 main.go:141] libmachine: (ha-290859)     <apic/>
	I0414 14:28:45.653258 1213155 main.go:141] libmachine: (ha-290859)     <pae/>
	I0414 14:28:45.653267 1213155 main.go:141] libmachine: (ha-290859)     
	I0414 14:28:45.653272 1213155 main.go:141] libmachine: (ha-290859)   </features>
	I0414 14:28:45.653277 1213155 main.go:141] libmachine: (ha-290859)   <cpu mode='host-passthrough'>
	I0414 14:28:45.653281 1213155 main.go:141] libmachine: (ha-290859)   
	I0414 14:28:45.653287 1213155 main.go:141] libmachine: (ha-290859)   </cpu>
	I0414 14:28:45.653317 1213155 main.go:141] libmachine: (ha-290859)   <os>
	I0414 14:28:45.653340 1213155 main.go:141] libmachine: (ha-290859)     <type>hvm</type>
	I0414 14:28:45.653351 1213155 main.go:141] libmachine: (ha-290859)     <boot dev='cdrom'/>
	I0414 14:28:45.653362 1213155 main.go:141] libmachine: (ha-290859)     <boot dev='hd'/>
	I0414 14:28:45.653372 1213155 main.go:141] libmachine: (ha-290859)     <bootmenu enable='no'/>
	I0414 14:28:45.653379 1213155 main.go:141] libmachine: (ha-290859)   </os>
	I0414 14:28:45.653387 1213155 main.go:141] libmachine: (ha-290859)   <devices>
	I0414 14:28:45.653396 1213155 main.go:141] libmachine: (ha-290859)     <disk type='file' device='cdrom'>
	I0414 14:28:45.653409 1213155 main.go:141] libmachine: (ha-290859)       <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/boot2docker.iso'/>
	I0414 14:28:45.653425 1213155 main.go:141] libmachine: (ha-290859)       <target dev='hdc' bus='scsi'/>
	I0414 14:28:45.653434 1213155 main.go:141] libmachine: (ha-290859)       <readonly/>
	I0414 14:28:45.653441 1213155 main.go:141] libmachine: (ha-290859)     </disk>
	I0414 14:28:45.653450 1213155 main.go:141] libmachine: (ha-290859)     <disk type='file' device='disk'>
	I0414 14:28:45.653459 1213155 main.go:141] libmachine: (ha-290859)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 14:28:45.653472 1213155 main.go:141] libmachine: (ha-290859)       <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/ha-290859.rawdisk'/>
	I0414 14:28:45.653484 1213155 main.go:141] libmachine: (ha-290859)       <target dev='hda' bus='virtio'/>
	I0414 14:28:45.653515 1213155 main.go:141] libmachine: (ha-290859)     </disk>
	I0414 14:28:45.653535 1213155 main.go:141] libmachine: (ha-290859)     <interface type='network'>
	I0414 14:28:45.653542 1213155 main.go:141] libmachine: (ha-290859)       <source network='mk-ha-290859'/>
	I0414 14:28:45.653551 1213155 main.go:141] libmachine: (ha-290859)       <model type='virtio'/>
	I0414 14:28:45.653571 1213155 main.go:141] libmachine: (ha-290859)     </interface>
	I0414 14:28:45.653583 1213155 main.go:141] libmachine: (ha-290859)     <interface type='network'>
	I0414 14:28:45.653600 1213155 main.go:141] libmachine: (ha-290859)       <source network='default'/>
	I0414 14:28:45.653612 1213155 main.go:141] libmachine: (ha-290859)       <model type='virtio'/>
	I0414 14:28:45.653620 1213155 main.go:141] libmachine: (ha-290859)     </interface>
	I0414 14:28:45.653629 1213155 main.go:141] libmachine: (ha-290859)     <serial type='pty'>
	I0414 14:28:45.653637 1213155 main.go:141] libmachine: (ha-290859)       <target port='0'/>
	I0414 14:28:45.653643 1213155 main.go:141] libmachine: (ha-290859)     </serial>
	I0414 14:28:45.653650 1213155 main.go:141] libmachine: (ha-290859)     <console type='pty'>
	I0414 14:28:45.653666 1213155 main.go:141] libmachine: (ha-290859)       <target type='serial' port='0'/>
	I0414 14:28:45.653677 1213155 main.go:141] libmachine: (ha-290859)     </console>
	I0414 14:28:45.653688 1213155 main.go:141] libmachine: (ha-290859)     <rng model='virtio'>
	I0414 14:28:45.653706 1213155 main.go:141] libmachine: (ha-290859)       <backend model='random'>/dev/random</backend>
	I0414 14:28:45.653722 1213155 main.go:141] libmachine: (ha-290859)     </rng>
	I0414 14:28:45.653733 1213155 main.go:141] libmachine: (ha-290859)     
	I0414 14:28:45.653742 1213155 main.go:141] libmachine: (ha-290859)     
	I0414 14:28:45.653750 1213155 main.go:141] libmachine: (ha-290859)   </devices>
	I0414 14:28:45.653759 1213155 main.go:141] libmachine: (ha-290859) </domain>
	I0414 14:28:45.653770 1213155 main.go:141] libmachine: (ha-290859) 
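The domain XML echoed line by line above is handed to libvirt as-is. A compact sketch of the define-and-start step using the libvirt Go bindings (libvirt.org/go/libvirt assumed; the kvm2 driver's real flow also prepares networks first and then waits for state, as the surrounding log shows):

package main

import (
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Same URI as KVMQemuURI in the cluster config above.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// ha-290859.xml stands in for the generated XML logged above.
	xml, err := os.ReadFile("ha-290859.xml")
	if err != nil {
		panic(err)
	}
	dom, err := conn.DomainDefineXML(string(xml)) // "define libvirt domain using xml"
	if err != nil {
		panic(err)
	}
	defer dom.Free()
	if err := dom.Create(); err != nil { // "starting domain..."
		panic(err)
	}
}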
	I0414 14:28:45.658722 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:59:bb:2c in network default
	I0414 14:28:45.659333 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:45.659353 1213155 main.go:141] libmachine: (ha-290859) starting domain...
	I0414 14:28:45.659378 1213155 main.go:141] libmachine: (ha-290859) ensuring networks are active...
	I0414 14:28:45.660118 1213155 main.go:141] libmachine: (ha-290859) Ensuring network default is active
	I0414 14:28:45.660455 1213155 main.go:141] libmachine: (ha-290859) Ensuring network mk-ha-290859 is active
	I0414 14:28:45.660871 1213155 main.go:141] libmachine: (ha-290859) getting domain XML...
	I0414 14:28:45.661572 1213155 main.go:141] libmachine: (ha-290859) creating domain...
	I0414 14:28:46.865636 1213155 main.go:141] libmachine: (ha-290859) waiting for IP...
	I0414 14:28:46.866384 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:46.866766 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:46.866798 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:46.866746 1213178 retry.go:31] will retry after 192.973653ms: waiting for domain to come up
	I0414 14:28:47.061336 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:47.061771 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:47.061833 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:47.061746 1213178 retry.go:31] will retry after 359.567223ms: waiting for domain to come up
	I0414 14:28:47.423487 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:47.423982 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:47.424016 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:47.423949 1213178 retry.go:31] will retry after 421.939914ms: waiting for domain to come up
	I0414 14:28:47.847747 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:47.848233 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:47.848285 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:47.848207 1213178 retry.go:31] will retry after 530.391474ms: waiting for domain to come up
	I0414 14:28:48.380081 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:48.380580 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:48.380623 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:48.380551 1213178 retry.go:31] will retry after 642.117854ms: waiting for domain to come up
	I0414 14:28:49.024104 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:49.024507 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:49.024543 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:49.024472 1213178 retry.go:31] will retry after 676.607867ms: waiting for domain to come up
	I0414 14:28:49.702625 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:49.702971 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:49.702999 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:49.702940 1213178 retry.go:31] will retry after 827.403569ms: waiting for domain to come up
	I0414 14:28:50.531673 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:50.532146 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:50.532168 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:50.532111 1213178 retry.go:31] will retry after 1.096062201s: waiting for domain to come up
	I0414 14:28:51.630700 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:51.631223 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:51.631271 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:51.631180 1213178 retry.go:31] will retry after 1.695737217s: waiting for domain to come up
	I0414 14:28:53.328391 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:53.328936 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:53.328976 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:53.328895 1213178 retry.go:31] will retry after 1.847433296s: waiting for domain to come up
	I0414 14:28:55.178635 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:55.179196 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:55.179222 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:55.179116 1213178 retry.go:31] will retry after 1.882043118s: waiting for domain to come up
	I0414 14:28:57.063275 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:57.063819 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:57.063839 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:57.063785 1213178 retry.go:31] will retry after 2.565601812s: waiting for domain to come up
	I0414 14:28:59.632546 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:59.633076 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:59.633121 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:59.633056 1213178 retry.go:31] will retry after 3.119155423s: waiting for domain to come up
	I0414 14:29:02.755950 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:02.756520 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:29:02.756617 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:29:02.756481 1213178 retry.go:31] will retry after 3.570724653s: waiting for domain to come up
	I0414 14:29:06.329744 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.330242 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has current primary IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.330260 1213155 main.go:141] libmachine: (ha-290859) found domain IP: 192.168.39.110
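The retry.go lines above are minikube's wait-for-IP loop: each probe of the network's DHCP leases for the domain's MAC backs off a little longer, with jitter, from roughly 200ms up to a few seconds. A Go sketch of that loop's shape (the backoff schedule and the lookupIP helper are illustrative, not minikube's exact policy):

package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupIP stands in for querying the libvirt DHCP leases for the
// domain's MAC address, as the DBG lines above do.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for domain to come up\n", delay)
		time.Sleep(delay)
		if delay < 3*time.Second {
			delay += delay / 2 // grow the wait, mirroring the log's lengthening intervals
		}
	}
	return "", errors.New("timed out waiting for domain IP")
}

func main() {
	if _, err := waitForIP(5 * time.Second); err != nil {
		fmt.Println(err)
	}
}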
	I0414 14:29:06.330269 1213155 main.go:141] libmachine: (ha-290859) reserving static IP address...
	I0414 14:29:06.330641 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find host DHCP lease matching {name: "ha-290859", mac: "52:54:00:be:9f:8b", ip: "192.168.39.110"} in network mk-ha-290859
	I0414 14:29:06.406487 1213155 main.go:141] libmachine: (ha-290859) DBG | Getting to WaitForSSH function...
	I0414 14:29:06.406521 1213155 main.go:141] libmachine: (ha-290859) reserved static IP address 192.168.39.110 for domain ha-290859
	I0414 14:29:06.406533 1213155 main.go:141] libmachine: (ha-290859) waiting for SSH...
	I0414 14:29:06.409873 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.410210 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:minikube Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.410253 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.410314 1213155 main.go:141] libmachine: (ha-290859) DBG | Using SSH client type: external
	I0414 14:29:06.410387 1213155 main.go:141] libmachine: (ha-290859) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa (-rw-------)
	I0414 14:29:06.410418 1213155 main.go:141] libmachine: (ha-290859) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.110 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:29:06.410439 1213155 main.go:141] libmachine: (ha-290859) DBG | About to run SSH command:
	I0414 14:29:06.410452 1213155 main.go:141] libmachine: (ha-290859) DBG | exit 0
	I0414 14:29:06.535060 1213155 main.go:141] libmachine: (ha-290859) DBG | SSH cmd err, output: <nil>: 
	I0414 14:29:06.535328 1213155 main.go:141] libmachine: (ha-290859) KVM machine creation complete
	I0414 14:29:06.535695 1213155 main.go:141] libmachine: (ha-290859) Calling .GetConfigRaw
	I0414 14:29:06.536306 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:06.536530 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:06.536742 1213155 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 14:29:06.536766 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:06.538276 1213155 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 14:29:06.538292 1213155 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 14:29:06.538297 1213155 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 14:29:06.538303 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:06.540789 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.541096 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.541142 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.541273 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:06.541468 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.541620 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.541797 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:06.541943 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:06.542216 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:06.542236 1213155 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 14:29:06.650464 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:29:06.650493 1213155 main.go:141] libmachine: Detecting the provisioner...
	I0414 14:29:06.650505 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:06.653952 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.654723 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.654757 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.654985 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:06.655204 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.655393 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.655541 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:06.655742 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:06.655964 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:06.655983 1213155 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 14:29:06.763752 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 14:29:06.763848 1213155 main.go:141] libmachine: found compatible host: buildroot
	I0414 14:29:06.763862 1213155 main.go:141] libmachine: Provisioning with buildroot...
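
Provisioner detection is just a read of /etc/os-release on the guest; a sketch of the same check, run on the guest:

	# /etc/os-release is sourceable shell: it defines ID, VERSION_ID, PRETTY_NAME, ...
	. /etc/os-release
	echo "$ID"   # "buildroot" here, which libmachine maps to its buildroot provisioner
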
	I0414 14:29:06.763874 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:29:06.764294 1213155 buildroot.go:166] provisioning hostname "ha-290859"
	I0414 14:29:06.764326 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:29:06.764523 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:06.767077 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.767516 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.767542 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.767639 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:06.767813 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.767978 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.768165 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:06.768341 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:06.768572 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:06.768583 1213155 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-290859 && echo "ha-290859" | sudo tee /etc/hostname
	I0414 14:29:06.889296 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-290859
	
	I0414 14:29:06.889330 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:06.892172 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.892600 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.892626 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.892865 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:06.893083 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.893277 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.893435 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:06.893648 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:06.893858 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:06.893874 1213155 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-290859' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-290859/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-290859' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:29:07.007141 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
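
The /etc/hosts edit above is idempotent: it only touches the file when no line already ends in the hostname, and it prefers rewriting an existing 127.0.1.1 entry over appending a new one. The same script as logged, annotated:

	if ! grep -xq '.*\sha-290859' /etc/hosts; then       # hostname not yet mapped
		if grep -xq '127.0.1.1\s.*' /etc/hosts; then     # an old 127.0.1.1 entry exists: rewrite it in place
			sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-290859/g' /etc/hosts
		else                                             # otherwise append a fresh entry
			echo '127.0.1.1 ha-290859' | sudo tee -a /etc/hosts
		fi
	fi
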
	I0414 14:29:07.007184 1213155 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1196368/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1196368/.minikube}
	I0414 14:29:07.007203 1213155 buildroot.go:174] setting up certificates
	I0414 14:29:07.007215 1213155 provision.go:84] configureAuth start
	I0414 14:29:07.007224 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:29:07.007528 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:29:07.010400 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.010788 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.010824 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.010979 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.012963 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.013271 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.013387 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.013515 1213155 provision.go:143] copyHostCerts
	I0414 14:29:07.013548 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:29:07.013586 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem, removing ...
	I0414 14:29:07.013609 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:29:07.013691 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem (1082 bytes)
	I0414 14:29:07.013790 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:29:07.013815 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem, removing ...
	I0414 14:29:07.013825 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:29:07.013863 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem (1123 bytes)
	I0414 14:29:07.013930 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:29:07.013953 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem, removing ...
	I0414 14:29:07.013962 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:29:07.013998 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem (1675 bytes)
	I0414 14:29:07.014066 1213155 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem org=jenkins.ha-290859 san=[127.0.0.1 192.168.39.110 ha-290859 localhost minikube]
	I0414 14:29:07.096347 1213155 provision.go:177] copyRemoteCerts
	I0414 14:29:07.096413 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:29:07.096445 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.099387 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.099720 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.099754 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.099919 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.100133 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.100320 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.100477 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:07.185597 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0414 14:29:07.185665 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 14:29:07.208427 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0414 14:29:07.208514 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0414 14:29:07.230077 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0414 14:29:07.230146 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 14:29:07.252057 1213155 provision.go:87] duration metric: took 244.822415ms to configureAuth
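
configureAuth generated a server certificate whose SANs (see the provision.go:117 line above) cover the node IP, hostname, localhost and minikube, then pushed the CA and server pair to /etc/docker on the guest. One way to double-check the SAN list on the host side, assuming openssl is available:

	# The SANs should include 127.0.0.1, 192.168.39.110, ha-290859, localhost, minikube
	openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem \
	    | grep -A1 'Subject Alternative Name'
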
	I0414 14:29:07.252098 1213155 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:29:07.252381 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:07.252417 1213155 main.go:141] libmachine: Checking connection to Docker...
	I0414 14:29:07.252428 1213155 main.go:141] libmachine: (ha-290859) Calling .GetURL
	I0414 14:29:07.253526 1213155 main.go:141] libmachine: (ha-290859) DBG | using libvirt version 6000000
	I0414 14:29:07.255629 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.255987 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.256013 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.256164 1213155 main.go:141] libmachine: Docker is up and running!
	I0414 14:29:07.256179 1213155 main.go:141] libmachine: Reticulating splines...
	I0414 14:29:07.256186 1213155 client.go:171] duration metric: took 22.312490028s to LocalClient.Create
	I0414 14:29:07.256207 1213155 start.go:167] duration metric: took 22.312544194s to libmachine.API.Create "ha-290859"
	I0414 14:29:07.256216 1213155 start.go:293] postStartSetup for "ha-290859" (driver="kvm2")
	I0414 14:29:07.256225 1213155 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:29:07.256242 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.256494 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:29:07.256518 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.258683 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.259095 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.259129 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.259274 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.259443 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.259598 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.259770 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:07.341222 1213155 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:29:07.344960 1213155 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:29:07.344983 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/addons for local assets ...
	I0414 14:29:07.345036 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/files for local assets ...
	I0414 14:29:07.345105 1213155 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> 12036392.pem in /etc/ssl/certs
	I0414 14:29:07.345117 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /etc/ssl/certs/12036392.pem
	I0414 14:29:07.345204 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:29:07.353618 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:29:07.375295 1213155 start.go:296] duration metric: took 119.0622ms for postStartSetup
	I0414 14:29:07.375348 1213155 main.go:141] libmachine: (ha-290859) Calling .GetConfigRaw
	I0414 14:29:07.376009 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:29:07.378738 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.379089 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.379127 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.379360 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:29:07.379552 1213155 start.go:128] duration metric: took 22.454193164s to createHost
	I0414 14:29:07.379576 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.381911 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.382271 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.382299 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.382412 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.382636 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.382763 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.382918 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.383103 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:07.383383 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:07.383397 1213155 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:29:07.491798 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744640947.466359070
	
	I0414 14:29:07.491832 1213155 fix.go:216] guest clock: 1744640947.466359070
	I0414 14:29:07.491843 1213155 fix.go:229] Guest: 2025-04-14 14:29:07.46635907 +0000 UTC Remote: 2025-04-14 14:29:07.37956282 +0000 UTC m=+22.563725092 (delta=86.79625ms)
	I0414 14:29:07.491874 1213155 fix.go:200] guest clock delta is within tolerance: 86.79625ms
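
The guest-clock check compares a timestamp read over SSH against the host clock and only resyncs when the delta exceeds a tolerance; here the 86.8 ms skew passes. A rough manual version of the same measurement (ignoring the SSH round-trip latency that minikube's bookkeeping accounts for):

	guest=$(ssh docker@192.168.39.110 'date +%s.%N')
	host=$(date +%s.%N)
	echo "skew: $(echo "$guest - $host" | bc) s"
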
	I0414 14:29:07.491882 1213155 start.go:83] releasing machines lock for "ha-290859", held for 22.566621352s
	I0414 14:29:07.491951 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.492257 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:29:07.494784 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.495186 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.495213 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.495369 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.495891 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.496108 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.496210 1213155 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:29:07.496270 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.496330 1213155 ssh_runner.go:195] Run: cat /version.json
	I0414 14:29:07.496359 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.499187 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.499556 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.499585 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.499605 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.499687 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.499909 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.500059 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.500076 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.500080 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.500225 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:07.500343 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.500495 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.500676 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.500868 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:07.610155 1213155 ssh_runner.go:195] Run: systemctl --version
	I0414 14:29:07.615832 1213155 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 14:29:07.620841 1213155 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:29:07.620918 1213155 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:29:07.635201 1213155 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 14:29:07.635238 1213155 start.go:495] detecting cgroup driver to use...
	I0414 14:29:07.635339 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0414 14:29:07.664507 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0414 14:29:07.677886 1213155 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:29:07.677968 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:29:07.691126 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:29:07.704327 1213155 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:29:07.821296 1213155 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:29:07.981478 1213155 docker.go:233] disabling docker service ...
	I0414 14:29:07.981570 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:29:07.995082 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:29:08.007593 1213155 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:29:08.118166 1213155 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:29:08.233009 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 14:29:08.245943 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:29:08.262966 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0414 14:29:08.272218 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0414 14:29:08.281344 1213155 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0414 14:29:08.281397 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0414 14:29:08.290468 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:29:08.299561 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0414 14:29:08.308656 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:29:08.317719 1213155 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:29:08.327133 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0414 14:29:08.336264 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0414 14:29:08.345279 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
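
The chain of sed commands above edits /etc/containerd/config.toml in place rather than regenerating it: pause image, OOM-score restriction, cgroupfs driver (SystemdCgroup = false, matching the kubelet cgroupDriver further below), runc v2 runtime, CNI conf_dir and unprivileged ports. The net result can be spot-checked with a grep (key names as they appear in containerd 1.7's config):

	grep -E 'sandbox_image|SystemdCgroup|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
	# sandbox_image = "registry.k8s.io/pause:3.10"
	# SystemdCgroup = false
	# conf_dir = "/etc/cni/net.d"
	# enable_unprivileged_ports = true
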
	I0414 14:29:08.354386 1213155 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:29:08.362578 1213155 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:29:08.362625 1213155 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:29:08.374609 1213155 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
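
The sysctl probe above fails simply because br_netfilter is not loaded yet (the /proc/sys/net/bridge tree does not exist until it is), so the code falls back to loading the module and then enables IPv4 forwarding. The equivalent manual steps:

	sudo modprobe br_netfilter                            # creates /proc/sys/net/bridge/*
	sudo sysctl net.bridge.bridge-nf-call-iptables        # now resolvable
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'   # required for routing pod traffic
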
	I0414 14:29:08.383117 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:29:08.490311 1213155 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0414 14:29:08.517222 1213155 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0414 14:29:08.517297 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:29:08.522141 1213155 retry.go:31] will retry after 1.326617724s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
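
Restarting containerd races with its socket reappearing, hence the stat-and-retry above. A minimal poll loop with the same 60-second budget mentioned at the "Will wait 60s" line:

	for i in $(seq 1 60); do
		stat /run/containerd/containerd.sock >/dev/null 2>&1 && break
		sleep 1
	done
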
	I0414 14:29:09.849693 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:29:09.855377 1213155 start.go:563] Will wait 60s for crictl version
	I0414 14:29:09.855452 1213155 ssh_runner.go:195] Run: which crictl
	I0414 14:29:09.859356 1213155 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:29:09.901676 1213155 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0414 14:29:09.901749 1213155 ssh_runner.go:195] Run: containerd --version
	I0414 14:29:09.933729 1213155 ssh_runner.go:195] Run: containerd --version
	I0414 14:29:09.957147 1213155 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.23 ...
	I0414 14:29:09.958358 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:29:09.961074 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:09.961436 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:09.961465 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:09.961654 1213155 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 14:29:09.965618 1213155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
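
Unlike the sed-based hostname edit earlier, the host.minikube.internal mapping is refreshed by rewriting the whole file: filter out any stale entry, append the current one, then copy the temp file back into place. The logged one-liner, expanded for readability:

	# Drop any previous host.minikube.internal line, append the current mapping, install via cp
	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  echo $'192.168.39.1\thost.minikube.internal'
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts
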
	I0414 14:29:09.977763 1213155 kubeadm.go:883] updating cluster {Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 14:29:09.977920 1213155 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:29:09.977985 1213155 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:29:10.007423 1213155 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0414 14:29:10.007567 1213155 ssh_runner.go:195] Run: which lz4
	I0414 14:29:10.011302 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0414 14:29:10.011399 1213155 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 14:29:10.015201 1213155 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 14:29:10.015237 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (398567491 bytes)
	I0414 14:29:11.177802 1213155 containerd.go:563] duration metric: took 1.166430977s to copy over tarball
	I0414 14:29:11.177883 1213155 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 14:29:13.222422 1213155 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.044497794s)
	I0414 14:29:13.222461 1213155 containerd.go:570] duration metric: took 2.04462504s to extract the tarball
	I0414 14:29:13.222471 1213155 ssh_runner.go:146] rm: /preloaded.tar.lz4
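
Since crictl found no preloaded images, the ~398 MB lz4 tarball is copied over and unpacked on top of /var, where containerd keeps its image content. The extraction command, annotated:

	# --xattrs/--xattrs-include preserve file capabilities on the unpacked binaries;
	# -I lz4 streams decompression through lz4 (hence the earlier `which lz4` check);
	# -C /var lands the content under /var/lib/containerd.
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
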
	I0414 14:29:13.258541 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:29:13.368119 1213155 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0414 14:29:13.394813 1213155 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:29:13.428402 1213155 retry.go:31] will retry after 248.442754ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-04-14T14:29:13Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0414 14:29:13.677983 1213155 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:29:13.709958 1213155 containerd.go:627] all images are preloaded for containerd runtime.
	I0414 14:29:13.709986 1213155 cache_images.go:84] Images are preloaded, skipping loading
	I0414 14:29:13.709997 1213155 kubeadm.go:934] updating node { 192.168.39.110 8443 v1.32.2 containerd true true} ...
	I0414 14:29:13.710119 1213155 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-290859 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.110
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 14:29:13.710205 1213155 ssh_runner.go:195] Run: sudo crictl info
	I0414 14:29:13.747854 1213155 cni.go:84] Creating CNI manager for ""
	I0414 14:29:13.747881 1213155 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0414 14:29:13.747891 1213155 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 14:29:13.747912 1213155 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.110 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-290859 NodeName:ha-290859 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.110"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.110 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 14:29:13.748064 1213155 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.110
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-290859"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.110"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.110"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
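
The payload above is four YAML documents in one file: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. Note that the kubelet section deliberately neuters disk-pressure eviction (the 0%/100% thresholds) so small CI disks do not evict pods. Assuming the staged filename from the scp line further below, a quick sanity check on the node:

	grep -c '^kind:' /var/tmp/minikube/kubeadm.yaml.new   # expect 4
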
	
	I0414 14:29:13.748098 1213155 kube-vip.go:115] generating kube-vip config ...
	I0414 14:29:13.748144 1213155 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0414 14:29:13.764006 1213155 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0414 14:29:13.764157 1213155 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.10
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
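
kube-vip runs as a static pod with NET_ADMIN/NET_RAW; whichever control-plane instance wins the plndr-cp-lock lease binds the VIP 192.168.39.254/32 on eth0 and, with lb_enable set, load-balances port 8443 across the API servers. To verify from the node currently holding the lease:

	ip addr show eth0 | grep 192.168.39.254   # the /32 VIP should appear next to the node IP
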
	I0414 14:29:13.764258 1213155 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 14:29:13.773742 1213155 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 14:29:13.773825 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0414 14:29:13.782879 1213155 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0414 14:29:13.798384 1213155 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 14:29:13.813614 1213155 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0414 14:29:13.828571 1213155 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1448 bytes)
	I0414 14:29:13.844489 1213155 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0414 14:29:13.848595 1213155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:29:13.861109 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:29:13.970530 1213155 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:29:13.987774 1213155 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859 for IP: 192.168.39.110
	I0414 14:29:13.987806 1213155 certs.go:194] generating shared ca certs ...
	I0414 14:29:13.987826 1213155 certs.go:226] acquiring lock for ca certs: {Name:mk7215406b4c41badf9eca6bf9f1036fd88f670e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:13.988007 1213155 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key
	I0414 14:29:13.988081 1213155 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key
	I0414 14:29:13.988097 1213155 certs.go:256] generating profile certs ...
	I0414 14:29:13.988180 1213155 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key
	I0414 14:29:13.988200 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt with IP's: []
	I0414 14:29:14.112386 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt ...
	I0414 14:29:14.112419 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt: {Name:mkaa12fb6551a5751b7fccd564d65a45c41d9fae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.112582 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key ...
	I0414 14:29:14.112593 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key: {Name:mk289f4dd0a4fd9031dc4ffc7198a0cf95bd5550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.112674 1213155 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037
	I0414 14:29:14.112690 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110 192.168.39.254]
	I0414 14:29:14.362652 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037 ...
	I0414 14:29:14.362686 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037: {Name:mkb37a2918627d85c90b385a1878c8973ae4ce15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.362861 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037 ...
	I0414 14:29:14.362875 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037: {Name:mk9be12aff468559ae8511cb5c354c2cb0f19d89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.362947 1213155 certs.go:381] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt
	I0414 14:29:14.363058 1213155 certs.go:385] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key
	I0414 14:29:14.363124 1213155 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key
	I0414 14:29:14.363139 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt with IP's: []
	I0414 14:29:14.734988 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt ...
	I0414 14:29:14.735020 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt: {Name:mkd4197f76084714cf4c93b86f69c9de5e486dfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.735175 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key ...
	I0414 14:29:14.735185 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key: {Name:mkafd73813de8b0bb698e460f51557bc241d5b76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.735249 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0414 14:29:14.735287 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0414 14:29:14.735300 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0414 14:29:14.735312 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0414 14:29:14.735324 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0414 14:29:14.735336 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0414 14:29:14.735348 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0414 14:29:14.735362 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0414 14:29:14.735413 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem (1338 bytes)
	W0414 14:29:14.735450 1213155 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639_empty.pem, impossibly tiny 0 bytes
	I0414 14:29:14.735459 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 14:29:14.735483 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem (1082 bytes)
	I0414 14:29:14.735504 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:29:14.735524 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem (1675 bytes)
	I0414 14:29:14.735559 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:29:14.735585 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:14.735598 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem -> /usr/share/ca-certificates/1203639.pem
	I0414 14:29:14.735609 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /usr/share/ca-certificates/12036392.pem
	I0414 14:29:14.736193 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:29:14.767094 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:29:14.800218 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:29:14.821856 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:29:14.844537 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0414 14:29:14.866333 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 14:29:14.888112 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:29:14.916382 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 14:29:14.938747 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:29:14.961044 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem --> /usr/share/ca-certificates/1203639.pem (1338 bytes)
	I0414 14:29:14.982817 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /usr/share/ca-certificates/12036392.pem (1708 bytes)
	I0414 14:29:15.004432 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 14:29:15.020381 1213155 ssh_runner.go:195] Run: openssl version
	I0414 14:29:15.026049 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:29:15.036472 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:15.040722 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:15.040772 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:15.046327 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 14:29:15.056866 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1203639.pem && ln -fs /usr/share/ca-certificates/1203639.pem /etc/ssl/certs/1203639.pem"
	I0414 14:29:15.067689 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1203639.pem
	I0414 14:29:15.071944 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1203639.pem
	I0414 14:29:15.071988 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1203639.pem
	I0414 14:29:15.077553 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1203639.pem /etc/ssl/certs/51391683.0"
	I0414 14:29:15.088088 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12036392.pem && ln -fs /usr/share/ca-certificates/12036392.pem /etc/ssl/certs/12036392.pem"
	I0414 14:29:15.098760 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12036392.pem
	I0414 14:29:15.103102 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/12036392.pem
	I0414 14:29:15.103157 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12036392.pem
	I0414 14:29:15.108670 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12036392.pem /etc/ssl/certs/3ec20f2e.0"
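
The three ln -fs commands above follow OpenSSL's c_rehash convention: each CA file in /usr/share/ca-certificates is linked into /etc/ssl/certs under its subject hash plus a ".0" suffix, which is how the b5213941.0, 51391683.0, and 3ec20f2e.0 names arise. A minimal Go sketch of that hash-and-link step (illustrative only; the installCA helper and hard-coded path are assumptions, not minikube's actual code):

    // installCA links a CA certificate into /etc/ssl/certs under its
    // OpenSSL subject-hash name (<hash>.0), as the log above does with
    // "openssl x509 -hash" followed by "ln -fs". Illustrative sketch only.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func installCA(pem string) error {
        // Equivalent of: openssl x509 -hash -noout -in <pem>
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pem, err)
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        // Equivalent of: ln -fs <pem> <link> (drop any stale link first).
        _ = os.Remove(link)
        return os.Symlink(pem, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
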
	I0414 14:29:15.119187 1213155 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:29:15.123052 1213155 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 14:29:15.123124 1213155 kubeadm.go:392] StartCluster: {Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:29:15.123226 1213155 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0414 14:29:15.123302 1213155 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 14:29:15.161985 1213155 cri.go:89] found id: ""
	I0414 14:29:15.162066 1213155 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 14:29:15.171810 1213155 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 14:29:15.180816 1213155 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 14:29:15.189781 1213155 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 14:29:15.189798 1213155 kubeadm.go:157] found existing configuration files:
	
	I0414 14:29:15.189837 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 14:29:15.198461 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 14:29:15.198520 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 14:29:15.207495 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 14:29:15.216131 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 14:29:15.216195 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 14:29:15.224923 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 14:29:15.233259 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 14:29:15.233331 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 14:29:15.241811 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 14:29:15.250678 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 14:29:15.250735 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
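
The cleanup pass above repeats one pattern per kubeconfig: grep for the expected control-plane URL and, when the grep exits non-zero, remove the file so kubeadm can regenerate it. A hedged local sketch of that check-then-remove loop (file list and endpoint taken from the log; removeStaleConfigs is an assumed name, not minikube's actual implementation):

    // removeStaleConfigs deletes any kubeconfig that does not reference the
    // expected control-plane endpoint, mirroring the grep/rm sequence above.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func removeStaleConfigs() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Missing file or wrong endpoint: treat as stale, as the log does.
                os.Remove(f)
                fmt.Printf("removed stale config %s\n", f)
            }
        }
    }

    func main() { removeStaleConfigs() }
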
	I0414 14:29:15.260028 1213155 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 14:29:15.480841 1213155 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 14:29:26.375395 1213155 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0414 14:29:26.375454 1213155 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 14:29:26.375539 1213155 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 14:29:26.375638 1213155 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 14:29:26.375756 1213155 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 14:29:26.375859 1213155 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 14:29:26.377483 1213155 out.go:235]   - Generating certificates and keys ...
	I0414 14:29:26.377576 1213155 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 14:29:26.377649 1213155 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 14:29:26.377746 1213155 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 14:29:26.377814 1213155 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 14:29:26.377894 1213155 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 14:29:26.377993 1213155 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 14:29:26.378062 1213155 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 14:29:26.378201 1213155 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-290859 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
	I0414 14:29:26.378273 1213155 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 14:29:26.378435 1213155 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-290859 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
	I0414 14:29:26.378525 1213155 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 14:29:26.378617 1213155 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 14:29:26.378679 1213155 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 14:29:26.378756 1213155 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 14:29:26.378826 1213155 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 14:29:26.378905 1213155 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0414 14:29:26.378987 1213155 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 14:29:26.379078 1213155 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 14:29:26.379147 1213155 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 14:29:26.379232 1213155 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 14:29:26.379336 1213155 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 14:29:26.381520 1213155 out.go:235]   - Booting up control plane ...
	I0414 14:29:26.381636 1213155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 14:29:26.381716 1213155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 14:29:26.381797 1213155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 14:29:26.381942 1213155 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 14:29:26.382066 1213155 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 14:29:26.382127 1213155 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 14:29:26.382279 1213155 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0414 14:29:26.382430 1213155 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0414 14:29:26.382522 1213155 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.073677ms
	I0414 14:29:26.382613 1213155 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0414 14:29:26.382699 1213155 kubeadm.go:310] [api-check] The API server is healthy after 6.046564753s
	I0414 14:29:26.382824 1213155 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0414 14:29:26.382965 1213155 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0414 14:29:26.383055 1213155 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0414 14:29:26.383232 1213155 kubeadm.go:310] [mark-control-plane] Marking the node ha-290859 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0414 14:29:26.383336 1213155 kubeadm.go:310] [bootstrap-token] Using token: vqb1fe.jxjhh2el8g0wstxf
	I0414 14:29:26.384515 1213155 out.go:235]   - Configuring RBAC rules ...
	I0414 14:29:26.384631 1213155 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0414 14:29:26.384713 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0414 14:29:26.384863 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0414 14:29:26.384975 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0414 14:29:26.385071 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0414 14:29:26.385151 1213155 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0414 14:29:26.385262 1213155 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0414 14:29:26.385326 1213155 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0414 14:29:26.385400 1213155 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0414 14:29:26.385416 1213155 kubeadm.go:310] 
	I0414 14:29:26.385469 1213155 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0414 14:29:26.385475 1213155 kubeadm.go:310] 
	I0414 14:29:26.385551 1213155 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0414 14:29:26.385557 1213155 kubeadm.go:310] 
	I0414 14:29:26.385578 1213155 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0414 14:29:26.385628 1213155 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0414 14:29:26.385686 1213155 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0414 14:29:26.385693 1213155 kubeadm.go:310] 
	I0414 14:29:26.385743 1213155 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0414 14:29:26.385752 1213155 kubeadm.go:310] 
	I0414 14:29:26.385800 1213155 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0414 14:29:26.385806 1213155 kubeadm.go:310] 
	I0414 14:29:26.385852 1213155 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0414 14:29:26.385921 1213155 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0414 14:29:26.385993 1213155 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0414 14:29:26.385999 1213155 kubeadm.go:310] 
	I0414 14:29:26.386068 1213155 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0414 14:29:26.386137 1213155 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0414 14:29:26.386143 1213155 kubeadm.go:310] 
	I0414 14:29:26.386219 1213155 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token vqb1fe.jxjhh2el8g0wstxf \
	I0414 14:29:26.386324 1213155 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c1bc537cee1b1ab5982921331b936a1839b1da6b0963279993bdeae11071854b \
	I0414 14:29:26.386357 1213155 kubeadm.go:310] 	--control-plane 
	I0414 14:29:26.386367 1213155 kubeadm.go:310] 
	I0414 14:29:26.386468 1213155 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0414 14:29:26.386481 1213155 kubeadm.go:310] 
	I0414 14:29:26.386583 1213155 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vqb1fe.jxjhh2el8g0wstxf \
	I0414 14:29:26.386727 1213155 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c1bc537cee1b1ab5982921331b936a1839b1da6b0963279993bdeae11071854b 
	I0414 14:29:26.386755 1213155 cni.go:84] Creating CNI manager for ""
	I0414 14:29:26.386764 1213155 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0414 14:29:26.388208 1213155 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0414 14:29:26.389242 1213155 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0414 14:29:26.394753 1213155 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0414 14:29:26.394774 1213155 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0414 14:29:26.412210 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0414 14:29:26.820060 1213155 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 14:29:26.820136 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:26.820188 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-290859 minikube.k8s.io/updated_at=2025_04_14T14_29_26_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=ed8f1f01b35eff2786f40199152a1775806f2de2 minikube.k8s.io/name=ha-290859 minikube.k8s.io/primary=true
	I0414 14:29:27.135153 1213155 ops.go:34] apiserver oom_adj: -16
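
The oom_adj probe above is a plain procfs read: find the kube-apiserver PID and read /proc/<pid>/oom_adj, where -16 means the kernel strongly deprioritizes the apiserver for OOM kills. A small sketch of the same probe (assumes pgrep returns a single matching PID):

    // apiserverOOMAdj reads /proc/<pid>/oom_adj for the kube-apiserver,
    // the same probe as "cat /proc/$(pgrep kube-apiserver)/oom_adj" above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func apiserverOOMAdj() (string, error) {
        pid, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            return "", fmt.Errorf("pgrep: %w", err)
        }
        data, err := os.ReadFile(fmt.Sprintf("/proc/%s/oom_adj", strings.TrimSpace(string(pid))))
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(data)), nil
    }

    func main() {
        adj, err := apiserverOOMAdj()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("apiserver oom_adj:", adj) // -16 in the run above
    }
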
	I0414 14:29:27.135367 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:27.635449 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:28.135449 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:28.636235 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:29.136309 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:29.636026 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:29.742992 1213155 kubeadm.go:1113] duration metric: took 2.922923817s to wait for elevateKubeSystemPrivileges
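
The six kubectl get sa default calls above, spaced roughly 500ms apart, are the wait for the default service account to appear, which signals that kube-system privileges can be elevated. A minimal polling sketch with the same cadence (waitForDefaultSA is an assumed name; the binary and kubeconfig paths are taken from the log):

    // waitForDefaultSA polls "kubectl get sa default" every 500ms until the
    // default service account exists or the timeout expires, matching the
    // cadence visible in the timestamps above. Sketch only.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func waitForDefaultSA(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.32.2/kubectl",
                "get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
            if cmd.Run() == nil {
                return nil // service account exists; RBAC is usable
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not ready after %s", timeout)
    }

    func main() {
        if err := waitForDefaultSA(2 * time.Minute); err != nil {
            fmt.Println(err)
        }
    }
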
	I0414 14:29:29.743045 1213155 kubeadm.go:394] duration metric: took 14.619926947s to StartCluster
	I0414 14:29:29.743074 1213155 settings.go:142] acquiring lock: {Name:mk41907a6d0da0bb56b7cd58b5d8065ec36ecc97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:29.743194 1213155 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:29:29.744197 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/kubeconfig: {Name:mkeb969af3beabfdafe344f27031959a97621135 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:29.744491 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0414 14:29:29.744502 1213155 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0414 14:29:29.744531 1213155 start.go:241] waiting for startup goroutines ...
	I0414 14:29:29.744555 1213155 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 14:29:29.744638 1213155 addons.go:69] Setting storage-provisioner=true in profile "ha-290859"
	I0414 14:29:29.744667 1213155 addons.go:238] Setting addon storage-provisioner=true in "ha-290859"
	I0414 14:29:29.744674 1213155 addons.go:69] Setting default-storageclass=true in profile "ha-290859"
	I0414 14:29:29.744699 1213155 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:29:29.744707 1213155 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-290859"
	I0414 14:29:29.744811 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:29.745181 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.745244 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.745183 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.745351 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.761398 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40887
	I0414 14:29:29.761447 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39907
	I0414 14:29:29.761914 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.762048 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.762457 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.762483 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.762590 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.762615 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.762878 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.762995 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.763052 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:29.763589 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.763641 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.765711 1213155 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:29:29.765898 1213155 kapi.go:59] client config for ha-290859: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt", KeyFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key", CAFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0414 14:29:29.766513 1213155 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0414 14:29:29.766536 1213155 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0414 14:29:29.766543 1213155 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0414 14:29:29.766547 1213155 cert_rotation.go:140] Starting client certificate rotation controller
	I0414 14:29:29.766549 1213155 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0414 14:29:29.766958 1213155 addons.go:238] Setting addon default-storageclass=true in "ha-290859"
	I0414 14:29:29.767009 1213155 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:29:29.767411 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.767464 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.779638 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46315
	I0414 14:29:29.780179 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.780847 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.780887 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.781279 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.781512 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:29.783372 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:29.783403 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36833
	I0414 14:29:29.783908 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.784349 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.784370 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.784677 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.785084 1213155 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 14:29:29.785313 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.785366 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.786178 1213155 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 14:29:29.786200 1213155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 14:29:29.786221 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:29.789923 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:29.790430 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:29.790464 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:29.790637 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:29.790795 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:29.790922 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:29.791099 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:29.802732 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37933
	I0414 14:29:29.803356 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.803862 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.803890 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.804276 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.804490 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:29.806170 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:29.806431 1213155 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 14:29:29.806453 1213155 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 14:29:29.806472 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:29.808998 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:29.809401 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:29.809433 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:29.809569 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:29.809729 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:29.809892 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:29.810022 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:29.896163 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0414 14:29:29.925192 1213155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 14:29:29.976032 1213155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 14:29:30.538988 1213155 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
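
The sed pipeline at 14:29:29.896163 inserts a hosts stanza ahead of CoreDNS's forward directive so that host.minikube.internal resolves to 192.168.39.1 inside the cluster. A string-based Go sketch of that Corefile edit (standing in for the sed pipeline; not minikube's implementation):

    // injectHostRecord inserts a hosts{} stanza before the forward directive
    // of a Corefile, the same edit the sed pipeline above performs in place.
    package main

    import (
        "fmt"
        "strings"
    )

    func injectHostRecord(corefile, hostIP string) string {
        hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
        // Insert the stanza immediately before the forward plugin line.
        idx := strings.Index(corefile, "        forward . /etc/resolv.conf")
        if idx < 0 {
            return corefile // nothing to do; leave the Corefile untouched
        }
        return corefile[:idx] + hosts + corefile[idx:]
    }

    func main() {
        in := "    .:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n    }\n"
        fmt.Print(injectHostRecord(in, "192.168.39.1"))
    }
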
	I0414 14:29:30.715801 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.715837 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.715837 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.715853 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.716172 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.716195 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.716206 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.716213 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.716280 1213155 main.go:141] libmachine: (ha-290859) DBG | Closing plugin on server side
	I0414 14:29:30.716311 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.716327 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.716336 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.716346 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.716567 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.716583 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.716597 1213155 main.go:141] libmachine: (ha-290859) DBG | Closing plugin on server side
	I0414 14:29:30.716566 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.716613 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.716759 1213155 round_trippers.go:470] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0414 14:29:30.716773 1213155 round_trippers.go:476] Request Headers:
	I0414 14:29:30.716785 1213155 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:29:30.716791 1213155 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:29:30.730413 1213155 round_trippers.go:581] Response Status: 200 OK in 13 milliseconds
	I0414 14:29:30.730637 1213155 round_trippers.go:470] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0414 14:29:30.730648 1213155 round_trippers.go:476] Request Headers:
	I0414 14:29:30.730655 1213155 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:29:30.730659 1213155 round_trippers.go:480]     Content-Type: application/vnd.kubernetes.protobuf
	I0414 14:29:30.730662 1213155 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:29:30.734349 1213155 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0414 14:29:30.734498 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.734513 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.734892 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.734913 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.734944 1213155 main.go:141] libmachine: (ha-290859) DBG | Closing plugin on server side
	I0414 14:29:30.736606 1213155 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0414 14:29:30.738276 1213155 addons.go:514] duration metric: took 993.723048ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0414 14:29:30.738323 1213155 start.go:246] waiting for cluster config update ...
	I0414 14:29:30.738339 1213155 start.go:255] writing updated cluster config ...
	I0414 14:29:30.739993 1213155 out.go:201] 
	I0414 14:29:30.741235 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:30.741303 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:29:30.742718 1213155 out.go:177] * Starting "ha-290859-m02" control-plane node in "ha-290859" cluster
	I0414 14:29:30.743745 1213155 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:29:30.743770 1213155 cache.go:56] Caching tarball of preloaded images
	I0414 14:29:30.743876 1213155 preload.go:172] Found /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0414 14:29:30.743890 1213155 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0414 14:29:30.743970 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:29:30.744172 1213155 start.go:360] acquireMachinesLock for ha-290859-m02: {Name:mk496006d22a0565bb9e0d565e1b3cb0cf0971cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 14:29:30.744229 1213155 start.go:364] duration metric: took 28.185µs to acquireMachinesLock for "ha-290859-m02"
	I0414 14:29:30.744249 1213155 start.go:93] Provisioning new machine with config: &{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0414 14:29:30.744334 1213155 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0414 14:29:30.745838 1213155 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0414 14:29:30.745923 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:30.745962 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:30.761449 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46555
	I0414 14:29:30.761938 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:30.762474 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:30.762500 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:30.762925 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:30.763197 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:29:30.763401 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:30.763637 1213155 start.go:159] libmachine.API.Create for "ha-290859" (driver="kvm2")
	I0414 14:29:30.763675 1213155 client.go:168] LocalClient.Create starting
	I0414 14:29:30.763717 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem
	I0414 14:29:30.763761 1213155 main.go:141] libmachine: Decoding PEM data...
	I0414 14:29:30.763783 1213155 main.go:141] libmachine: Parsing certificate...
	I0414 14:29:30.763861 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem
	I0414 14:29:30.763890 1213155 main.go:141] libmachine: Decoding PEM data...
	I0414 14:29:30.763907 1213155 main.go:141] libmachine: Parsing certificate...
	I0414 14:29:30.763954 1213155 main.go:141] libmachine: Running pre-create checks...
	I0414 14:29:30.763968 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .PreCreateCheck
	I0414 14:29:30.764183 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
	I0414 14:29:30.764607 1213155 main.go:141] libmachine: Creating machine...
	I0414 14:29:30.764633 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .Create
	I0414 14:29:30.764796 1213155 main.go:141] libmachine: (ha-290859-m02) creating KVM machine...
	I0414 14:29:30.764820 1213155 main.go:141] libmachine: (ha-290859-m02) creating network...
	I0414 14:29:30.765949 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found existing default KVM network
	I0414 14:29:30.766029 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found existing private KVM network mk-ha-290859
	I0414 14:29:30.766196 1213155 main.go:141] libmachine: (ha-290859-m02) setting up store path in /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02 ...
	I0414 14:29:30.766222 1213155 main.go:141] libmachine: (ha-290859-m02) building disk image from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 14:29:30.766301 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:30.766189 1213531 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:29:30.766373 1213155 main.go:141] libmachine: (ha-290859-m02) Downloading /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 14:29:31.062543 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:31.062391 1213531 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa...
	I0414 14:29:31.719024 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:31.718890 1213531 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/ha-290859-m02.rawdisk...
	I0414 14:29:31.719061 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Writing magic tar header
	I0414 14:29:31.719076 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Writing SSH key tar header
	I0414 14:29:31.719086 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:31.719015 1213531 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02 ...
	I0414 14:29:31.719187 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02
	I0414 14:29:31.719213 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02 (perms=drwx------)
	I0414 14:29:31.719221 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines
	I0414 14:29:31.719232 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:29:31.719239 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines (perms=drwxr-xr-x)
	I0414 14:29:31.719270 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368
	I0414 14:29:31.719288 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube (perms=drwxr-xr-x)
	I0414 14:29:31.719298 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 14:29:31.719315 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins
	I0414 14:29:31.719326 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home
	I0414 14:29:31.719336 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | skipping /home - not owner
	I0414 14:29:31.719349 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368 (perms=drwxrwxr-x)
	I0414 14:29:31.719368 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 14:29:31.719380 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 14:29:31.719386 1213155 main.go:141] libmachine: (ha-290859-m02) creating domain...
	I0414 14:29:31.720303 1213155 main.go:141] libmachine: (ha-290859-m02) define libvirt domain using xml: 
	I0414 14:29:31.720321 1213155 main.go:141] libmachine: (ha-290859-m02) <domain type='kvm'>
	I0414 14:29:31.720330 1213155 main.go:141] libmachine: (ha-290859-m02)   <name>ha-290859-m02</name>
	I0414 14:29:31.720338 1213155 main.go:141] libmachine: (ha-290859-m02)   <memory unit='MiB'>2200</memory>
	I0414 14:29:31.720346 1213155 main.go:141] libmachine: (ha-290859-m02)   <vcpu>2</vcpu>
	I0414 14:29:31.720352 1213155 main.go:141] libmachine: (ha-290859-m02)   <features>
	I0414 14:29:31.720359 1213155 main.go:141] libmachine: (ha-290859-m02)     <acpi/>
	I0414 14:29:31.720364 1213155 main.go:141] libmachine: (ha-290859-m02)     <apic/>
	I0414 14:29:31.720371 1213155 main.go:141] libmachine: (ha-290859-m02)     <pae/>
	I0414 14:29:31.720381 1213155 main.go:141] libmachine: (ha-290859-m02)     
	I0414 14:29:31.720411 1213155 main.go:141] libmachine: (ha-290859-m02)   </features>
	I0414 14:29:31.720433 1213155 main.go:141] libmachine: (ha-290859-m02)   <cpu mode='host-passthrough'>
	I0414 14:29:31.720452 1213155 main.go:141] libmachine: (ha-290859-m02)   
	I0414 14:29:31.720461 1213155 main.go:141] libmachine: (ha-290859-m02)   </cpu>
	I0414 14:29:31.720488 1213155 main.go:141] libmachine: (ha-290859-m02)   <os>
	I0414 14:29:31.720507 1213155 main.go:141] libmachine: (ha-290859-m02)     <type>hvm</type>
	I0414 14:29:31.720537 1213155 main.go:141] libmachine: (ha-290859-m02)     <boot dev='cdrom'/>
	I0414 14:29:31.720559 1213155 main.go:141] libmachine: (ha-290859-m02)     <boot dev='hd'/>
	I0414 14:29:31.720572 1213155 main.go:141] libmachine: (ha-290859-m02)     <bootmenu enable='no'/>
	I0414 14:29:31.720587 1213155 main.go:141] libmachine: (ha-290859-m02)   </os>
	I0414 14:29:31.720597 1213155 main.go:141] libmachine: (ha-290859-m02)   <devices>
	I0414 14:29:31.720609 1213155 main.go:141] libmachine: (ha-290859-m02)     <disk type='file' device='cdrom'>
	I0414 14:29:31.720626 1213155 main.go:141] libmachine: (ha-290859-m02)       <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/boot2docker.iso'/>
	I0414 14:29:31.720637 1213155 main.go:141] libmachine: (ha-290859-m02)       <target dev='hdc' bus='scsi'/>
	I0414 14:29:31.720649 1213155 main.go:141] libmachine: (ha-290859-m02)       <readonly/>
	I0414 14:29:31.720659 1213155 main.go:141] libmachine: (ha-290859-m02)     </disk>
	I0414 14:29:31.720668 1213155 main.go:141] libmachine: (ha-290859-m02)     <disk type='file' device='disk'>
	I0414 14:29:31.720684 1213155 main.go:141] libmachine: (ha-290859-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 14:29:31.720699 1213155 main.go:141] libmachine: (ha-290859-m02)       <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/ha-290859-m02.rawdisk'/>
	I0414 14:29:31.720732 1213155 main.go:141] libmachine: (ha-290859-m02)       <target dev='hda' bus='virtio'/>
	I0414 14:29:31.720746 1213155 main.go:141] libmachine: (ha-290859-m02)     </disk>
	I0414 14:29:31.720756 1213155 main.go:141] libmachine: (ha-290859-m02)     <interface type='network'>
	I0414 14:29:31.720768 1213155 main.go:141] libmachine: (ha-290859-m02)       <source network='mk-ha-290859'/>
	I0414 14:29:31.720777 1213155 main.go:141] libmachine: (ha-290859-m02)       <model type='virtio'/>
	I0414 14:29:31.720788 1213155 main.go:141] libmachine: (ha-290859-m02)     </interface>
	I0414 14:29:31.720799 1213155 main.go:141] libmachine: (ha-290859-m02)     <interface type='network'>
	I0414 14:29:31.720809 1213155 main.go:141] libmachine: (ha-290859-m02)       <source network='default'/>
	I0414 14:29:31.720821 1213155 main.go:141] libmachine: (ha-290859-m02)       <model type='virtio'/>
	I0414 14:29:31.720835 1213155 main.go:141] libmachine: (ha-290859-m02)     </interface>
	I0414 14:29:31.720844 1213155 main.go:141] libmachine: (ha-290859-m02)     <serial type='pty'>
	I0414 14:29:31.720855 1213155 main.go:141] libmachine: (ha-290859-m02)       <target port='0'/>
	I0414 14:29:31.720865 1213155 main.go:141] libmachine: (ha-290859-m02)     </serial>
	I0414 14:29:31.720875 1213155 main.go:141] libmachine: (ha-290859-m02)     <console type='pty'>
	I0414 14:29:31.720886 1213155 main.go:141] libmachine: (ha-290859-m02)       <target type='serial' port='0'/>
	I0414 14:29:31.720896 1213155 main.go:141] libmachine: (ha-290859-m02)     </console>
	I0414 14:29:31.720909 1213155 main.go:141] libmachine: (ha-290859-m02)     <rng model='virtio'>
	I0414 14:29:31.720943 1213155 main.go:141] libmachine: (ha-290859-m02)       <backend model='random'>/dev/random</backend>
	I0414 14:29:31.720956 1213155 main.go:141] libmachine: (ha-290859-m02)     </rng>
	I0414 14:29:31.720962 1213155 main.go:141] libmachine: (ha-290859-m02)     
	I0414 14:29:31.720972 1213155 main.go:141] libmachine: (ha-290859-m02)     
	I0414 14:29:31.720978 1213155 main.go:141] libmachine: (ha-290859-m02)   </devices>
	I0414 14:29:31.720993 1213155 main.go:141] libmachine: (ha-290859-m02) </domain>
	I0414 14:29:31.721002 1213155 main.go:141] libmachine: (ha-290859-m02) 
	I0414 14:29:31.727524 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:76:01:7d in network default
	I0414 14:29:31.728172 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:31.728187 1213155 main.go:141] libmachine: (ha-290859-m02) starting domain...
	I0414 14:29:31.728195 1213155 main.go:141] libmachine: (ha-290859-m02) ensuring networks are active...
	I0414 14:29:31.728896 1213155 main.go:141] libmachine: (ha-290859-m02) Ensuring network default is active
	I0414 14:29:31.729170 1213155 main.go:141] libmachine: (ha-290859-m02) Ensuring network mk-ha-290859 is active
	I0414 14:29:31.729521 1213155 main.go:141] libmachine: (ha-290859-m02) getting domain XML...
	I0414 14:29:31.730489 1213155 main.go:141] libmachine: (ha-290859-m02) creating domain...
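
Defining the guest from the XML above and then booting it is the standard libvirt define/create pair. A hedged sketch that shells out to virsh for the same two steps (minikube itself goes through libvirt bindings rather than the CLI; the XML path here is an assumption):

    // defineAndStart registers a domain from an XML file and boots it,
    // the same define/create sequence the log performs through libvirt.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func defineAndStart(xmlPath, name string) error {
        // virsh define persists the domain configuration...
        if out, err := exec.Command("virsh", "define", xmlPath).CombinedOutput(); err != nil {
            return fmt.Errorf("define: %v: %s", err, out)
        }
        // ...and virsh start boots it.
        if out, err := exec.Command("virsh", "start", name).CombinedOutput(); err != nil {
            return fmt.Errorf("start: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        if err := defineAndStart("/tmp/ha-290859-m02.xml", "ha-290859-m02"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
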
	I0414 14:29:32.993969 1213155 main.go:141] libmachine: (ha-290859-m02) waiting for IP...
	I0414 14:29:32.996009 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:32.996441 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:32.996505 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:32.996448 1213531 retry.go:31] will retry after 202.522594ms: waiting for domain to come up
	I0414 14:29:33.201175 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:33.201705 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:33.201751 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:33.201682 1213531 retry.go:31] will retry after 346.96007ms: waiting for domain to come up
	I0414 14:29:33.550485 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:33.550900 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:33.550931 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:33.550863 1213531 retry.go:31] will retry after 407.207189ms: waiting for domain to come up
	I0414 14:29:33.959550 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:33.960116 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:33.960149 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:33.960094 1213531 retry.go:31] will retry after 434.401549ms: waiting for domain to come up
	I0414 14:29:34.395749 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:34.396217 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:34.396267 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:34.396208 1213531 retry.go:31] will retry after 552.547121ms: waiting for domain to come up
	I0414 14:29:34.949860 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:34.950310 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:34.950344 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:34.950269 1213531 retry.go:31] will retry after 848.939274ms: waiting for domain to come up
	I0414 14:29:35.800706 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:35.801275 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:35.801301 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:35.801229 1213531 retry.go:31] will retry after 1.078619357s: waiting for domain to come up
	I0414 14:29:36.881700 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:36.882163 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:36.882187 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:36.882128 1213531 retry.go:31] will retry after 1.079210669s: waiting for domain to come up
	I0414 14:29:37.963455 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:37.963935 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:37.963969 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:37.963899 1213531 retry.go:31] will retry after 1.194058186s: waiting for domain to come up
	I0414 14:29:39.160481 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:39.160993 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:39.161031 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:39.160949 1213531 retry.go:31] will retry after 1.513626688s: waiting for domain to come up
	I0414 14:29:40.676551 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:40.677038 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:40.677071 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:40.677004 1213531 retry.go:31] will retry after 1.924347004s: waiting for domain to come up
	I0414 14:29:42.603644 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:42.604168 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:42.604192 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:42.604145 1213531 retry.go:31] will retry after 2.797639018s: waiting for domain to come up
	I0414 14:29:45.405004 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:45.405658 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:45.405688 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:45.405627 1213531 retry.go:31] will retry after 2.864814671s: waiting for domain to come up
	I0414 14:29:48.274060 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:48.274518 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:48.274591 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:48.274508 1213531 retry.go:31] will retry after 4.611052523s: waiting for domain to come up
	I0414 14:29:52.886693 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:52.887068 1213155 main.go:141] libmachine: (ha-290859-m02) found domain IP: 192.168.39.111
	I0414 14:29:52.887093 1213155 main.go:141] libmachine: (ha-290859-m02) reserving static IP address...
	I0414 14:29:52.887105 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has current primary IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:52.887506 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find host DHCP lease matching {name: "ha-290859-m02", mac: "52:54:00:f0:fd:94", ip: "192.168.39.111"} in network mk-ha-290859
	I0414 14:29:52.966052 1213155 main.go:141] libmachine: (ha-290859-m02) reserved static IP address 192.168.39.111 for domain ha-290859-m02
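
The retry.go lines above show libmachine polling the network's DHCP state with a growing, jittered delay until the domain reports an address. A minimal Go sketch of that backoff pattern (function names are illustrative, not minikube's actual retry.go API):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls lookup until it yields an address, sleeping a randomized,
	// roughly doubling delay between attempts, like the retry.go lines above.
	func waitForIP(lookup func() (string, error), maxWait time.Duration) (string, error) {
		deadline := time.Now().Add(maxWait)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			ip, err := lookup()
			if err == nil {
				return ip, nil
			}
			jittered := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for domain to come up\n", jittered)
			time.Sleep(jittered)
			delay *= 2
		}
		return "", errors.New("timed out waiting for domain IP")
	}

	func main() {
		attempts := 0
		ip, err := waitForIP(func() (string, error) {
			attempts++
			if attempts < 4 {
				return "", errors.New("unable to find current IP address")
			}
			return "192.168.39.111", nil
		}, 30*time.Second)
		fmt.Println(ip, err)
	}
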
	I0414 14:29:52.966083 1213155 main.go:141] libmachine: (ha-290859-m02) waiting for SSH...
	I0414 14:29:52.966091 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Getting to WaitForSSH function...
	I0414 14:29:52.968665 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:52.969034 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:52.969082 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:52.969208 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Using SSH client type: external
	I0414 14:29:52.969231 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa (-rw-------)
	I0414 14:29:52.969263 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:29:52.969282 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | About to run SSH command:
	I0414 14:29:52.969295 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | exit 0
	I0414 14:29:53.095336 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | SSH cmd err, output: <nil>: 
	I0414 14:29:53.095545 1213155 main.go:141] libmachine: (ha-290859-m02) KVM machine creation complete
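
The WaitForSSH step above shells out to the system ssh binary with the options logged ("Using SSH client type: external") and runs `exit 0` until the command succeeds. A self-contained sketch of that probe; the key path and loop cadence here are assumptions, not minikube's exact values:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// sshReady runs `exit 0` through the system ssh binary with options
	// similar to the external-SSH command line logged above.
	func sshReady(ip, keyPath string) bool {
		cmd := exec.Command("ssh",
			"-F", "/dev/null",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"docker@"+ip, "exit 0")
		return cmd.Run() == nil // a nil error means the guest answered `exit 0`
	}

	func main() {
		for !sshReady("192.168.39.111", "/path/to/id_rsa") { // hypothetical key path
			time.Sleep(time.Second)
		}
		fmt.Println("SSH is up")
	}
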
	I0414 14:29:53.095910 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
	I0414 14:29:53.096462 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:53.096622 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:53.096806 1213155 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 14:29:53.096820 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetState
	I0414 14:29:53.098070 1213155 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 14:29:53.098085 1213155 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 14:29:53.098090 1213155 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 14:29:53.098095 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.100244 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.100649 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.100680 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.100852 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.101066 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.101236 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.101372 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.101519 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:53.101769 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:53.101782 1213155 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 14:29:53.206593 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:29:53.206617 1213155 main.go:141] libmachine: Detecting the provisioner...
	I0414 14:29:53.206628 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.209588 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.209969 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.209988 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.210187 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.210382 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.210544 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.210717 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.210971 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:53.211192 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:53.211205 1213155 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 14:29:53.315888 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 14:29:53.315980 1213155 main.go:141] libmachine: found compatible host: buildroot
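
The provisioner is detected by running `cat /etc/os-release` on the guest and matching the ID field, which is why the Buildroot output above resolves to "found compatible host: buildroot". A sketch of that lookup (helper name is illustrative, not minikube's):

	package main

	import (
		"fmt"
		"strings"
	)

	// detectProvisioner picks the distro ID out of `cat /etc/os-release` output.
	func detectProvisioner(osRelease string) string {
		for _, line := range strings.Split(osRelease, "\n") {
			if v, ok := strings.CutPrefix(line, "ID="); ok {
				return strings.Trim(v, `"`)
			}
		}
		return "unknown"
	}

	func main() {
		out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\n"
		fmt.Println(detectProvisioner(out)) // buildroot
	}
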
	I0414 14:29:53.315990 1213155 main.go:141] libmachine: Provisioning with buildroot...
	I0414 14:29:53.316001 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:29:53.316277 1213155 buildroot.go:166] provisioning hostname "ha-290859-m02"
	I0414 14:29:53.316306 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:29:53.316451 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.319393 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.319803 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.319837 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.319946 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.320140 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.320321 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.320450 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.320602 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:53.320806 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:53.320818 1213155 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-290859-m02 && echo "ha-290859-m02" | sudo tee /etc/hostname
	I0414 14:29:53.442594 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-290859-m02
	
	I0414 14:29:53.442629 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.445561 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.445918 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.445944 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.446150 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.446351 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.446528 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.446678 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.446833 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:53.447038 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:53.447053 1213155 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-290859-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-290859-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-290859-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:29:53.559946 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:29:53.559988 1213155 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1196368/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1196368/.minikube}
	I0414 14:29:53.560014 1213155 buildroot.go:174] setting up certificates
	I0414 14:29:53.560031 1213155 provision.go:84] configureAuth start
	I0414 14:29:53.560046 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:29:53.560377 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:29:53.562822 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.563207 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.563237 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.563574 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.566107 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.566478 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.566505 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.566628 1213155 provision.go:143] copyHostCerts
	I0414 14:29:53.566676 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:29:53.566716 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem, removing ...
	I0414 14:29:53.566730 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:29:53.566839 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem (1082 bytes)
	I0414 14:29:53.566954 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:29:53.566979 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem, removing ...
	I0414 14:29:53.566987 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:29:53.567026 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem (1123 bytes)
	I0414 14:29:53.567106 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:29:53.567130 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem, removing ...
	I0414 14:29:53.567137 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:29:53.567173 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem (1675 bytes)
	I0414 14:29:53.567293 1213155 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem org=jenkins.ha-290859-m02 san=[127.0.0.1 192.168.39.111 ha-290859-m02 localhost minikube]
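
The san=[...] list above becomes the DNSNames and IPAddresses of the generated server certificate, signed by the cluster CA. A compact crypto/x509 sketch of issuing such a cert; the throwaway in-memory CA is an assumption for self-containment (minikube loads ca.pem/ca-key.pem from disk), and validity periods are illustrative:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Throwaway CA for the sketch; minikube uses its persisted minikubeCA.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now().Add(-time.Hour),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server cert whose SANs mirror the log's san=[...] list above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-290859-m02"}},
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-290859-m02", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.111")},
		}
		der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		fmt.Println(len(der), err)
	}
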
	I0414 14:29:53.976110 1213155 provision.go:177] copyRemoteCerts
	I0414 14:29:53.976184 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:29:53.976219 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.978798 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.979170 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.979202 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.979355 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.979571 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.979771 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.979950 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:29:54.060926 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0414 14:29:54.061020 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0414 14:29:54.083723 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0414 14:29:54.083818 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 14:29:54.106702 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0414 14:29:54.106773 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 14:29:54.128136 1213155 provision.go:87] duration metric: took 568.088664ms to configureAuth
	I0414 14:29:54.128177 1213155 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:29:54.128372 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:54.128400 1213155 main.go:141] libmachine: Checking connection to Docker...
	I0414 14:29:54.128413 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetURL
	I0414 14:29:54.129571 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | using libvirt version 6000000
	I0414 14:29:54.131690 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.132071 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.132095 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.132296 1213155 main.go:141] libmachine: Docker is up and running!
	I0414 14:29:54.132311 1213155 main.go:141] libmachine: Reticulating splines...
	I0414 14:29:54.132318 1213155 client.go:171] duration metric: took 23.368636066s to LocalClient.Create
	I0414 14:29:54.132344 1213155 start.go:167] duration metric: took 23.368708618s to libmachine.API.Create "ha-290859"
	I0414 14:29:54.132356 1213155 start.go:293] postStartSetup for "ha-290859-m02" (driver="kvm2")
	I0414 14:29:54.132370 1213155 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:29:54.132394 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.132652 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:29:54.132681 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:54.134726 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.135119 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.135146 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.135312 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:54.135512 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.135648 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:54.135782 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:29:54.217134 1213155 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:29:54.221237 1213155 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:29:54.221265 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/addons for local assets ...
	I0414 14:29:54.221324 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/files for local assets ...
	I0414 14:29:54.221392 1213155 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> 12036392.pem in /etc/ssl/certs
	I0414 14:29:54.221401 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /etc/ssl/certs/12036392.pem
	I0414 14:29:54.221495 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:29:54.230111 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:29:54.253934 1213155 start.go:296] duration metric: took 121.560617ms for postStartSetup
	I0414 14:29:54.253995 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
	I0414 14:29:54.254683 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:29:54.257374 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.257778 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.257811 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.258118 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:29:54.258332 1213155 start.go:128] duration metric: took 23.513984018s to createHost
	I0414 14:29:54.258362 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:54.260873 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.261257 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.261285 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.261448 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:54.261638 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.261821 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.261984 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:54.262185 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:54.262369 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:54.262379 1213155 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:29:54.367727 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744640994.343893226
	
	I0414 14:29:54.367759 1213155 fix.go:216] guest clock: 1744640994.343893226
	I0414 14:29:54.367766 1213155 fix.go:229] Guest: 2025-04-14 14:29:54.343893226 +0000 UTC Remote: 2025-04-14 14:29:54.258346943 +0000 UTC m=+69.442509295 (delta=85.546283ms)
	I0414 14:29:54.367782 1213155 fix.go:200] guest clock delta is within tolerance: 85.546283ms
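
The guest clock check runs `date +%s.%N` on the VM, parses the seconds.nanoseconds pair, and compares it with the host-side timestamp; the 85.546283ms delta above falls inside tolerance. A sketch reproducing that arithmetic on the exact values logged (the 1s tolerance below is an assumption, not necessarily minikube's configured value):

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns `date +%s.%N` output into a time.Time.
	func parseGuestClock(s string) (time.Time, error) {
		sec, nsec := strings.TrimSpace(s), "0"
		if i := strings.IndexByte(sec, '.'); i >= 0 {
			sec, nsec = sec[:i], sec[i+1:]
		}
		secs, err := strconv.ParseInt(sec, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		ns, err := strconv.ParseInt(nsec, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		return time.Unix(secs, ns), nil
	}

	func main() {
		guest, _ := parseGuestClock("1744640994.343893226")
		remote := time.Date(2025, time.April, 14, 14, 29, 54, 258346943, time.UTC)
		delta := guest.Sub(remote)                 // 85.546283ms, matching the log
		within := math.Abs(delta.Seconds()) < 1.0  // assumed 1s tolerance
		fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, within)
	}
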
	I0414 14:29:54.367788 1213155 start.go:83] releasing machines lock for "ha-290859-m02", held for 23.623550564s
	I0414 14:29:54.367807 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.368115 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:29:54.370975 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.371432 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.371462 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.373758 1213155 out.go:177] * Found network options:
	I0414 14:29:54.375127 1213155 out.go:177]   - NO_PROXY=192.168.39.110
	W0414 14:29:54.376278 1213155 proxy.go:119] fail to check proxy env: Error ip not in block
	I0414 14:29:54.376312 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.376913 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.377127 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.377268 1213155 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:29:54.377316 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	W0414 14:29:54.377370 1213155 proxy.go:119] fail to check proxy env: Error ip not in block
	I0414 14:29:54.377457 1213155 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0414 14:29:54.377481 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:54.380102 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.380374 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.380406 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.380429 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.380578 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:54.380741 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.380859 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.380897 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.380909 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:54.381045 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:29:54.381125 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:54.381305 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.381467 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:54.381614 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	W0414 14:29:54.458225 1213155 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:29:54.458308 1213155 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:29:54.490449 1213155 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 14:29:54.490475 1213155 start.go:495] detecting cgroup driver to use...
	I0414 14:29:54.490555 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0414 14:29:54.524660 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0414 14:29:54.537871 1213155 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:29:54.537936 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:29:54.549801 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:29:54.562203 1213155 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:29:54.666348 1213155 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:29:54.786710 1213155 docker.go:233] disabling docker service ...
	I0414 14:29:54.786789 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:29:54.800092 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:29:54.812105 1213155 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:29:54.936777 1213155 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:29:55.059002 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 14:29:55.072980 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:29:55.089970 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0414 14:29:55.099362 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0414 14:29:55.108681 1213155 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0414 14:29:55.108766 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0414 14:29:55.118203 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:29:55.127402 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0414 14:29:55.136483 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:29:55.145554 1213155 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:29:55.154769 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0414 14:29:55.163700 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0414 14:29:55.172612 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
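
Each sed invocation above is a line-anchored rewrite of /etc/containerd/config.toml. The same kind of edit expressed with Go's regexp package, shown for the SystemdCgroup (cgroupfs driver) and sandbox_image keys on a tiny sample config:

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		// Equivalent of two of the sed rewrites above:
		//   s|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|
		//   s|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|
		conf := "    sandbox_image = \"registry.k8s.io/pause:3.9\"\n" +
			"    SystemdCgroup = true\n"
		conf = regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`).
			ReplaceAllString(conf, "${1}SystemdCgroup = false")
		conf = regexp.MustCompile(`(?m)^( *)sandbox_image = .*$`).
			ReplaceAllString(conf, `${1}sandbox_image = "registry.k8s.io/pause:3.10"`)
		fmt.Print(conf)
	}
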
	I0414 14:29:55.181597 1213155 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:29:55.189962 1213155 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:29:55.190019 1213155 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:29:55.202112 1213155 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 14:29:55.210883 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:29:55.319480 1213155 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0414 14:29:55.344914 1213155 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0414 14:29:55.345008 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:29:55.349081 1213155 retry.go:31] will retry after 1.00520308s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0414 14:29:56.354657 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
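
The stat-and-retry above is a bounded wait for the containerd socket to reappear after the service restart. A minimal sketch of that wait (names illustrative):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket stats the path until it exists or the deadline passes,
	// mirroring the "Will wait 60s for socket path" step above.
	func waitForSocket(path string, maxWait time.Duration) error {
		deadline := time.Now().Add(maxWait)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		fmt.Println(waitForSocket("/run/containerd/containerd.sock", 60*time.Second))
	}
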
	I0414 14:29:56.359600 1213155 start.go:563] Will wait 60s for crictl version
	I0414 14:29:56.359685 1213155 ssh_runner.go:195] Run: which crictl
	I0414 14:29:56.363336 1213155 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:29:56.403201 1213155 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0414 14:29:56.403312 1213155 ssh_runner.go:195] Run: containerd --version
	I0414 14:29:56.430179 1213155 ssh_runner.go:195] Run: containerd --version
	I0414 14:29:56.454598 1213155 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.23 ...
	I0414 14:29:56.455785 1213155 out.go:177]   - env NO_PROXY=192.168.39.110
	I0414 14:29:56.456735 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:29:56.459280 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:56.459661 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:56.459691 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:56.459901 1213155 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 14:29:56.463673 1213155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:29:56.475057 1213155 mustload.go:65] Loading cluster: ha-290859
	I0414 14:29:56.475248 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:56.475557 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:56.475600 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:56.490597 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45247
	I0414 14:29:56.491136 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:56.491690 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:56.491711 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:56.492119 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:56.492309 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:56.493794 1213155 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:29:56.494134 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:56.494173 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:56.509360 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38381
	I0414 14:29:56.509774 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:56.510229 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:56.510256 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:56.510618 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:56.510840 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
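
The "Launching plugin server" / "Plugin server listening" lines reflect libmachine's driver-plugin model: the kvm2 driver runs as a separate binary serving RPC on a random loopback port, and the client dials it to issue calls such as GetVersion. A toy net/rpc sketch of that handshake; the Driver type and method here are illustrative, not libmachine's real interface:

	package main

	import (
		"fmt"
		"net"
		"net/rpc"
	)

	type Driver struct{}

	// GetVersion stands in for the version handshake seen in the log.
	func (d *Driver) GetVersion(args int, reply *int) error {
		*reply = 1
		return nil
	}

	func main() {
		srv := rpc.NewServer()
		_ = srv.Register(&Driver{})
		ln, err := net.Listen("tcp", "127.0.0.1:0") // random loopback port
		if err != nil {
			panic(err)
		}
		fmt.Printf("Plugin server listening at address %s\n", ln.Addr())
		go srv.Accept(ln)

		client, err := rpc.Dial("tcp", ln.Addr().String())
		if err != nil {
			panic(err)
		}
		var v int
		_ = client.Call("Driver.GetVersion", 0, &v)
		fmt.Println("Using API Version ", v)
	}
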
	I0414 14:29:56.511031 1213155 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859 for IP: 192.168.39.111
	I0414 14:29:56.511044 1213155 certs.go:194] generating shared ca certs ...
	I0414 14:29:56.511057 1213155 certs.go:226] acquiring lock for ca certs: {Name:mk7215406b4c41badf9eca6bf9f1036fd88f670e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:56.511177 1213155 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key
	I0414 14:29:56.511226 1213155 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key
	I0414 14:29:56.511236 1213155 certs.go:256] generating profile certs ...
	I0414 14:29:56.511347 1213155 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key
	I0414 14:29:56.511373 1213155 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e
	I0414 14:29:56.511386 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110 192.168.39.111 192.168.39.254]
	I0414 14:29:56.589532 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e ...
	I0414 14:29:56.589564 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e: {Name:mk9fb7b2adad4a62e9ebf1f50826b8647aaaa2d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:56.589727 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e ...
	I0414 14:29:56.589740 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e: {Name:mk7ad07038879568d4a23c2fb5c04f12405eb02f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:56.589811 1213155 certs.go:381] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt
	I0414 14:29:56.589948 1213155 certs.go:385] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key
	I0414 14:29:56.590096 1213155 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key
	I0414 14:29:56.590118 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0414 14:29:56.590137 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0414 14:29:56.590151 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0414 14:29:56.590162 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0414 14:29:56.590180 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0414 14:29:56.590198 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0414 14:29:56.590211 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0414 14:29:56.590220 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0414 14:29:56.590271 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem (1338 bytes)
	W0414 14:29:56.590298 1213155 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639_empty.pem, impossibly tiny 0 bytes
	I0414 14:29:56.590308 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 14:29:56.590327 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem (1082 bytes)
	I0414 14:29:56.590346 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:29:56.590368 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem (1675 bytes)
	I0414 14:29:56.590404 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:29:56.590430 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:56.590446 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem -> /usr/share/ca-certificates/1203639.pem
	I0414 14:29:56.590457 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /usr/share/ca-certificates/12036392.pem
	I0414 14:29:56.590494 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:56.593379 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:56.593755 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:56.593777 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:56.593996 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:56.594232 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:56.594405 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:56.594540 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:56.671687 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0414 14:29:56.677338 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0414 14:29:56.689003 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0414 14:29:56.693487 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0414 14:29:56.704430 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0414 14:29:56.708650 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0414 14:29:56.719039 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0414 14:29:56.723166 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0414 14:29:56.734152 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0414 14:29:56.738243 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0414 14:29:56.749081 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0414 14:29:56.753248 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0414 14:29:56.764073 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:29:56.788198 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:29:56.813073 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:29:56.835958 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:29:56.859645 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0414 14:29:56.882879 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 14:29:56.906187 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:29:56.928932 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 14:29:56.952365 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:29:56.974920 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem --> /usr/share/ca-certificates/1203639.pem (1338 bytes)
	I0414 14:29:56.998466 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /usr/share/ca-certificates/12036392.pem (1708 bytes)
	I0414 14:29:57.022704 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0414 14:29:57.038828 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0414 14:29:57.054237 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0414 14:29:57.069513 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0414 14:29:57.085532 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0414 14:29:57.101522 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0414 14:29:57.117372 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0414 14:29:57.132827 1213155 ssh_runner.go:195] Run: openssl version
	I0414 14:29:57.138331 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:29:57.148324 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:57.152469 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:57.152557 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:57.158279 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 14:29:57.169126 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1203639.pem && ln -fs /usr/share/ca-certificates/1203639.pem /etc/ssl/certs/1203639.pem"
	I0414 14:29:57.179995 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1203639.pem
	I0414 14:29:57.184265 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1203639.pem
	I0414 14:29:57.184340 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1203639.pem
	I0414 14:29:57.189810 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1203639.pem /etc/ssl/certs/51391683.0"
	I0414 14:29:57.199987 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12036392.pem && ln -fs /usr/share/ca-certificates/12036392.pem /etc/ssl/certs/12036392.pem"
	I0414 14:29:57.210177 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12036392.pem
	I0414 14:29:57.214740 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/12036392.pem
	I0414 14:29:57.214815 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12036392.pem
	I0414 14:29:57.221853 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12036392.pem /etc/ssl/certs/3ec20f2e.0"
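The three openssl/ln sequences above install each CA into OpenSSL's hash-based lookup directory: "openssl x509 -hash -noout" prints the subject hash (b5213941 for minikubeCA.pem), and a "<hash>.0" symlink in /etc/ssl/certs points back at the installed PEM so lookup-by-hash finds it. A sketch of the idempotent link step in Go, assuming the hash has already been computed by openssl as in the log:

    // Sketch: reproduce the `test -L <hash>.0 || ln -fs <pem> <hash>.0` step.
    package main

    import (
        "log"
        "os"
        "path/filepath"
    )

    func ensureHashLink(pem, hash string) error {
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        if fi, err := os.Lstat(link); err == nil && fi.Mode()&os.ModeSymlink != 0 {
            return nil // `test -L` succeeded: the link already exists
        }
        _ = os.Remove(link) // `ln -fs` semantics: replace whatever is in the way
        return os.Symlink(pem, link)
    }

    func main() {
        // Hash value taken from the log output above, not recomputed here.
        if err := ensureHashLink("/etc/ssl/certs/minikubeCA.pem", "b5213941"); err != nil {
            log.Fatal(err)
        }
    }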
	I0414 14:29:57.232248 1213155 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:29:57.236270 1213155 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 14:29:57.236327 1213155 kubeadm.go:934] updating node {m02 192.168.39.111 8443 v1.32.2 containerd true true} ...
	I0414 14:29:57.236439 1213155 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-290859-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
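The rendered unit above uses the standard systemd drop-in idiom: the first, empty "ExecStart=" clears the base unit's command so the second line can replace it with node-specific flags (--hostname-override and --node-ip differ per node). A sketch of rendering that drop-in with text/template follows; the struct fields are illustrative, not minikube's kubeadm package:

    // Sketch: render the per-node kubelet drop-in shown in the log.
    package main

    import (
        "os"
        "text/template"
    )

    const unit = `[Unit]
    Wants=containerd.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(unit))
        // Values match the m02 node in this run.
        _ = t.Execute(os.Stdout, struct {
            KubernetesVersion, NodeName, NodeIP string
        }{"v1.32.2", "ha-290859-m02", "192.168.39.111"})
    }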
	I0414 14:29:57.236473 1213155 kube-vip.go:115] generating kube-vip config ...
	I0414 14:29:57.236525 1213155 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0414 14:29:57.252239 1213155 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0414 14:29:57.252336 1213155 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable

	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.10
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
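The generated manifest runs kube-vip as a static pod on each control-plane node, and the vip_leaderelection/vip_leasename/vip_leaseduration/vip_renewdeadline/vip_retryperiod settings describe a Kubernetes Lease-based election for holding the VIP (192.168.39.254). A generic client-go sketch with the same lock name and timings, assuming in-cluster credentials; this is not kube-vip's source:

    // Sketch: Lease-based leader election matching the env vars above.
    package main

    import (
        "context"
        "log"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            log.Fatal(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        lock := &resourcelock.LeaseLock{
            LeaseMeta:  metav1.ObjectMeta{Name: "plndr-cp-lock", Namespace: "kube-system"}, // vip_leasename / cp_namespace
            Client:     client.CoordinationV1(),
            LockConfig: resourcelock.ResourceLockConfig{Identity: "ha-290859-m02"}, // this node's identity
        }
        leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
            Lock:          lock,
            LeaseDuration: 5 * time.Second, // vip_leaseduration
            RenewDeadline: 3 * time.Second, // vip_renewdeadline
            RetryPeriod:   1 * time.Second, // vip_retryperiod
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: func(ctx context.Context) { /* leader advertises the VIP */ },
                OnStoppedLeading: func() { /* loser/ex-leader withdraws the VIP */ },
            },
        })
    }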
	I0414 14:29:57.252412 1213155 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 14:29:57.262218 1213155 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.2': No such file or directory
	
	Initiating transfer...
	I0414 14:29:57.262295 1213155 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.2
	I0414 14:29:57.271580 1213155 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
	I0414 14:29:57.271599 1213155 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubeadm
	I0414 14:29:57.271617 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubectl -> /var/lib/minikube/binaries/v1.32.2/kubectl
	I0414 14:29:57.271622 1213155 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubelet
	I0414 14:29:57.271681 1213155 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl
	I0414 14:29:57.275804 1213155 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubectl': No such file or directory
	I0414 14:29:57.275835 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubectl --> /var/lib/minikube/binaries/v1.32.2/kubectl (57323672 bytes)
	I0414 14:29:58.408400 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:29:58.423781 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubelet -> /var/lib/minikube/binaries/v1.32.2/kubelet
	I0414 14:29:58.423898 1213155 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet
	I0414 14:29:58.428378 1213155 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubelet': No such file or directory
	I0414 14:29:58.428415 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubelet --> /var/lib/minikube/binaries/v1.32.2/kubelet (77406468 bytes)
	I0414 14:29:58.749359 1213155 out.go:201] 
	W0414 14:29:58.750775 1213155 out.go:270] X Exiting due to GUEST_START: failed to start node: adding node: update node: downloading binaries: downloading kubeadm: download failed: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256 Dst:/home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubeadm.download Pwd: Mode:2 Umask:---------- Detectors:[0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0] Decompressors:map[bz2:0xc0004c8690 gz:0xc0004c8698 tar:0xc0004c8610 tar.bz2:0xc0004c8620 tar.gz:0xc0004c8630 tar.xz:0xc0004c8650 tar.zst:0xc0004c8660 tbz2:0xc0004c8620 tgz:0xc0004c8630 txz:0xc0004c8650 tzst:0xc0004c8660 xz:0xc0004c8700 zip:0xc0004c8720 zst:0xc0004c8708] Getters:map[file:0xc00216a250 http:0xc00012c550 https:0xc00012c5a0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: read tcp 10.154.0.3:60586->151.101.193.55:443: read: connection reset by peer
	W0414 14:29:58.750801 1213155 out.go:270] * 
	W0414 14:29:58.751639 1213155 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 14:29:58.753070 1213155 out.go:201] 
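The GUEST_START failure above is a transient network error: hashicorp/go-getter (the "&{Ctx:...}" value is its client struct) was fetching kubeadm with a detached SHA256 checksum (the "?checksum=file:..." query string, which go-getter verifies after download) when the TCP connection to dl.k8s.io was reset by the peer. A minimal, hypothetical retry wrapper around the same download shape; the retry policy is illustrative, not minikube's download package:

    // Sketch: checksum-verified download with a simple retry on transient errors.
    package main

    import (
        "fmt"
        "log"
        "time"

        getter "github.com/hashicorp/go-getter"
    )

    func main() {
        src := "https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm" +
            "?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256"
        dst := "/tmp/kubeadm.download"
        var err error
        for attempt := 1; attempt <= 3; attempt++ {
            if err = getter.GetFile(dst, src); err == nil {
                fmt.Println("downloaded and checksum-verified:", dst)
                return
            }
            // Resets like "read: connection reset by peer" are worth retrying.
            log.Printf("attempt %d failed: %v", attempt, err)
            time.Sleep(time.Duration(attempt) * time.Second)
        }
        log.Fatal(err)
    }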
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	24e6d7cfe7ea4       8c811b4aec35f       13 minutes ago      Running             busybox                   0                   78438e8022143       busybox-58667487b6-t6bgg
	731a9f2fe8645       c69fa2e9cbf5f       13 minutes ago      Running             coredns                   0                   e56d2e4c87eea       coredns-668d6bf9bc-qnl6q
	0ec0a3a234c7c       c69fa2e9cbf5f       13 minutes ago      Running             coredns                   0                   2818c413e6e32       coredns-668d6bf9bc-wbn4p
	922f97d06563e       6e38f40d628db       13 minutes ago      Running             storage-provisioner       0                   4de376d34ee7f       storage-provisioner
	2df8ccb8d6ed9       df3849d954c98       13 minutes ago      Running             kindnet-cni               0                   08244cfc780bd       kindnet-hm99t
	e22a81661302f       f1332858868e1       13 minutes ago      Running             kube-proxy                0                   f20a0bcfbd507       kube-proxy-cg945
	9914f8879fc43       6ff023a402a69       13 minutes ago      Running             kube-vip                  0                   7b4e857fc4a72       kube-vip-ha-290859
	8263b35014337       b6a454c5a800d       13 minutes ago      Running             kube-controller-manager   0                   96ffccfabb2f0       kube-controller-manager-ha-290859
	3607093f95b04       85b7a174738ba       13 minutes ago      Running             kube-apiserver            0                   7d06c53c8318a       kube-apiserver-ha-290859
	b9d0c94204534       a9e7e6b294baf       13 minutes ago      Running             etcd                      0                   07c98c2ded11c       etcd-ha-290859
	341626ffff967       d8e673e7c9983       13 minutes ago      Running             kube-scheduler            0                   d86edf81d4f34       kube-scheduler-ha-290859
	
	
	==> containerd <==
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.168944603Z" level=info msg="StartContainer for \"0ec0a3a234c7c9ab89ca83a237362a229e9c5f0e94fdbf641b886cf994e1cd2f\""
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.181036869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qnl6q,Uid:a590080d-c4b1-4697-9849-ae6130e483a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"e56d2e4c87eea2d527e5c301e33c596e4ec4533b17e49248e3c35eeb66f90f11\""
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.186359489Z" level=info msg="CreateContainer within sandbox \"e56d2e4c87eea2d527e5c301e33c596e4ec4533b17e49248e3c35eeb66f90f11\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.209760426Z" level=info msg="CreateContainer within sandbox \"e56d2e4c87eea2d527e5c301e33c596e4ec4533b17e49248e3c35eeb66f90f11\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0\""
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.212826022Z" level=info msg="StartContainer for \"922f97d06563e10c12ce83edd45e4f1aa0b78449dcdb50b413a7f4fc80cc346b\" returns successfully"
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.215681811Z" level=info msg="StartContainer for \"731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0\""
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.285830032Z" level=info msg="StartContainer for \"0ec0a3a234c7c9ab89ca83a237362a229e9c5f0e94fdbf641b886cf994e1cd2f\" returns successfully"
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.294639585Z" level=info msg="StartContainer for \"731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0\" returns successfully"
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.131928214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-58667487b6-t6bgg,Uid:bd39f57c-bcb5-4d77-b171-6d4d2f237e54,Namespace:default,Attempt:0,}"
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.218617705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.218691310Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.218706805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.218958691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.281907696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-58667487b6-t6bgg,Uid:bd39f57c-bcb5-4d77-b171-6d4d2f237e54,Namespace:default,Attempt:0,} returns sandbox id \"78438e8022143055bed5e2d8a26db130ead88208a68bd14ca25618be3edf24e2\""
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.284050999Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\""
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.401970091Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.404464641Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28: active requests=0, bytes read=727667"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.406415797Z" level=info msg="ImageCreate event name:\"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.409920833Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.411266903Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28\" with image id \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\", repo tag \"gcr.io/k8s-minikube/busybox:1.28\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\", size \"725911\" in 2.127171694s"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.411378057Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\" returns image reference \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\""
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.414728181Z" level=info msg="CreateContainer within sandbox \"78438e8022143055bed5e2d8a26db130ead88208a68bd14ca25618be3edf24e2\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.437197602Z" level=info msg="CreateContainer within sandbox \"78438e8022143055bed5e2d8a26db130ead88208a68bd14ca25618be3edf24e2\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"24e6d7cfe7ea4490a4e08a40f32b9cf717c4d83060631102c580d6adf2fc47f5\""
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.439640223Z" level=info msg="StartContainer for \"24e6d7cfe7ea4490a4e08a40f32b9cf717c4d83060631102c580d6adf2fc47f5\""
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.489937462Z" level=info msg="StartContainer for \"24e6d7cfe7ea4490a4e08a40f32b9cf717c4d83060631102c580d6adf2fc47f5\" returns successfully"
	
	
	==> coredns [0ec0a3a234c7c9ab89ca83a237362a229e9c5f0e94fdbf641b886cf994e1cd2f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:46089 - 56153 "HINFO IN 6072608555509463616.6529762715821029691. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009374887s
	[INFO] 10.244.0.4:35907 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000221161s
	[INFO] 10.244.0.4:36782 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005796917s
	[INFO] 10.244.0.4:41522 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000189547s
	[INFO] 10.244.0.4:42146 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118814s
	[INFO] 10.244.0.4:60607 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123758s
	[INFO] 10.244.0.4:43711 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000363945s
	[INFO] 10.244.0.4:55165 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147511s
	[INFO] 10.244.0.4:37988 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063814s
	[INFO] 10.244.0.4:34715 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110518s
	
	
	==> coredns [731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:50026 - 40228 "HINFO IN 6089878548460793106.7503956428927620962. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010088983s
	[INFO] 10.244.0.4:56129 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00054069s
	[INFO] 10.244.0.4:53926 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.015577927s
	[INFO] 10.244.0.4:39454 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 1.017801671s
	[INFO] 10.244.0.4:52928 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.006480432s
	[INFO] 10.244.0.4:37155 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000144828s
	[INFO] 10.244.0.4:60063 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003567762s
	[INFO] 10.244.0.4:60207 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153406s
	[INFO] 10.244.0.4:60174 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117303s
	[INFO] 10.244.0.4:60031 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124845s
	[INFO] 10.244.0.4:43114 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000177401s
	[INFO] 10.244.0.4:59108 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000291115s
	
	
	==> describe nodes <==
	Name:               ha-290859
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-290859
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ed8f1f01b35eff2786f40199152a1775806f2de2
	                    minikube.k8s.io/name=ha-290859
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_14T14_29_26_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Apr 2025 14:29:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-290859
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Apr 2025 14:43:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Apr 2025 14:42:20 +0000   Mon, 14 Apr 2025 14:29:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Apr 2025 14:42:20 +0000   Mon, 14 Apr 2025 14:29:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Apr 2025 14:42:20 +0000   Mon, 14 Apr 2025 14:29:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Apr 2025 14:42:20 +0000   Mon, 14 Apr 2025 14:29:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.110
	  Hostname:    ha-290859
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0538f5775f954b3bbf6bc94e8eb6c49a
	  System UUID:                0538f577-5f95-4b3b-bf6b-c94e8eb6c49a
	  Boot ID:                    357ae105-a7f9-47b1-bf31-1c1aadedfe92
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-t6bgg             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-668d6bf9bc-qnl6q             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-668d6bf9bc-wbn4p             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-290859                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-hm99t                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-290859             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-290859    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-cg945                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-290859             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-290859                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node ha-290859 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node ha-290859 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node ha-290859 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m   node-controller  Node ha-290859 event: Registered Node ha-290859 in Controller
	  Normal  NodeReady                13m   kubelet          Node ha-290859 status is now: NodeReady
	
	
	Name:               ha-290859-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-290859-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ed8f1f01b35eff2786f40199152a1775806f2de2
	                    minikube.k8s.io/name=ha-290859
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_04_14T14_42_30_0700
	                    minikube.k8s.io/version=v1.35.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Apr 2025 14:42:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-290859-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Apr 2025 14:43:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Apr 2025 14:43:00 +0000   Mon, 14 Apr 2025 14:42:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Apr 2025 14:43:00 +0000   Mon, 14 Apr 2025 14:42:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Apr 2025 14:43:00 +0000   Mon, 14 Apr 2025 14:42:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Apr 2025 14:43:00 +0000   Mon, 14 Apr 2025 14:42:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.112
	  Hostname:    ha-290859-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 96e9da9bd9e1490583702338b88b0c23
	  System UUID:                96e9da9b-d9e1-4905-8370-2338b88b0c23
	  Boot ID:                    b2600615-03c7-4984-8138-73f9baedc04e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-8bg2x    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kindnet-4jz25               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      34s
	  kube-system                 kube-proxy-sp56w            0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28s                kube-proxy       
	  Normal  NodeHasSufficientMemory  34s (x2 over 34s)  kubelet          Node ha-290859-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s (x2 over 34s)  kubelet          Node ha-290859-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s (x2 over 34s)  kubelet          Node ha-290859-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  34s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           30s                node-controller  Node ha-290859-m03 event: Registered Node ha-290859-m03 in Controller
	  Normal  NodeReady                14s                kubelet          Node ha-290859-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051284] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038065] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.815736] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.968563] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.543371] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr14 14:29] systemd-fstab-generator[505]: Ignoring "noauto" option for root device
	[  +0.058894] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059786] systemd-fstab-generator[518]: Ignoring "noauto" option for root device
	[  +0.183634] systemd-fstab-generator[532]: Ignoring "noauto" option for root device
	[  +0.109211] systemd-fstab-generator[544]: Ignoring "noauto" option for root device
	[  +0.261328] systemd-fstab-generator[574]: Ignoring "noauto" option for root device
	[  +4.868852] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.061817] kauditd_printk_skb: 158 callbacks suppressed
	[  +0.541337] systemd-fstab-generator[688]: Ignoring "noauto" option for root device
	[  +4.433977] systemd-fstab-generator[826]: Ignoring "noauto" option for root device
	[  +0.054755] kauditd_printk_skb: 46 callbacks suppressed
	[  +7.040196] systemd-fstab-generator[1293]: Ignoring "noauto" option for root device
	[  +0.092655] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.133260] kauditd_printk_skb: 36 callbacks suppressed
	[ +14.332004] kauditd_printk_skb: 23 callbacks suppressed
	[Apr14 14:30] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [b9d0c942045346e617420beacf1ee53ebaa73b72295bfad233845fe524f8b15c] <==
	{"level":"info","ts":"2025-04-14T14:29:20.939433Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T14:29:20.940639Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a3dbfa6decfc8853","local-member-id":"fbb007bab925a598","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:29:20.940850Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:29:20.940910Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:29:20.941291Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T14:29:20.941327Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T14:29:20.942134Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-14T14:29:20.942264Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.110:2379"}
	{"level":"info","ts":"2025-04-14T14:29:20.943625Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-14T14:29:20.943655Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2025-04-14T14:29:27.104552Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.197172ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/node-controller\" limit:1 ","response":"range_response_count:1 size:195"}
	{"level":"info","ts":"2025-04-14T14:29:27.104712Z","caller":"traceutil/trace.go:171","msg":"trace[2014118741] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/node-controller; range_end:; response_count:1; response_revision:283; }","duration":"161.489617ms","start":"2025-04-14T14:29:26.943197Z","end":"2025-04-14T14:29:27.104687Z","steps":["trace[2014118741] 'range keys from in-memory index tree'  (duration: 161.141805ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:29:27.105569Z","caller":"traceutil/trace.go:171","msg":"trace[1003808847] transaction","detail":"{read_only:false; response_revision:284; number_of_response:1; }","duration":"157.128151ms","start":"2025-04-14T14:29:26.948431Z","end":"2025-04-14T14:29:27.105559Z","steps":["trace[1003808847] 'process raft request'  (duration: 84.378612ms)","trace[1003808847] 'compare'  (duration: 71.52798ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-14T14:29:27.104865Z","caller":"traceutil/trace.go:171","msg":"trace[43329066] linearizableReadLoop","detail":"{readStateIndex:297; appliedIndex:296; }","duration":"119.436827ms","start":"2025-04-14T14:29:26.985404Z","end":"2025-04-14T14:29:27.104841Z","steps":["trace[43329066] 'read index received'  (duration: 47.335931ms)","trace[43329066] 'applied index is now lower than readState.Index'  (duration: 72.100547ms)"],"step_count":2}
	{"level":"warn","ts":"2025-04-14T14:29:27.105882Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.482108ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-290859\" limit:1 ","response":"range_response_count:1 size:4024"}
	{"level":"info","ts":"2025-04-14T14:29:27.105907Z","caller":"traceutil/trace.go:171","msg":"trace[1848025885] range","detail":"{range_begin:/registry/minions/ha-290859; range_end:; response_count:1; response_revision:284; }","duration":"120.538719ms","start":"2025-04-14T14:29:26.985360Z","end":"2025-04-14T14:29:27.105899Z","steps":["trace[1848025885] 'agreement among raft nodes before linearized reading'  (duration: 120.384333ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:30:04.979205Z","caller":"traceutil/trace.go:171","msg":"trace[85484590] transaction","detail":"{read_only:false; response_revision:496; number_of_response:1; }","duration":"156.247744ms","start":"2025-04-14T14:30:04.822935Z","end":"2025-04-14T14:30:04.979183Z","steps":["trace[85484590] 'process raft request'  (duration: 156.102613ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:39:20.967676Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":955}
	{"level":"info","ts":"2025-04-14T14:39:20.980951Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":955,"took":"12.971168ms","hash":3281203929,"current-db-size-bytes":2400256,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2400256,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2025-04-14T14:39:20.980998Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3281203929,"revision":955,"compact-revision":-1}
	{"level":"info","ts":"2025-04-14T14:42:12.425594Z","caller":"traceutil/trace.go:171","msg":"trace[593749251] linearizableReadLoop","detail":"{readStateIndex:1974; appliedIndex:1973; }","duration":"103.549581ms","start":"2025-04-14T14:42:12.322004Z","end":"2025-04-14T14:42:12.425554Z","steps":["trace[593749251] 'read index received'  (duration: 102.720139ms)","trace[593749251] 'applied index is now lower than readState.Index'  (duration: 828.805µs)"],"step_count":2}
	{"level":"warn","ts":"2025-04-14T14:42:12.426144Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.759593ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2025-04-14T14:42:12.426196Z","caller":"traceutil/trace.go:171","msg":"trace[257637869] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; response_count:0; response_revision:1805; }","duration":"104.23976ms","start":"2025-04-14T14:42:12.321948Z","end":"2025-04-14T14:42:12.426188Z","steps":["trace[257637869] 'agreement among raft nodes before linearized reading'  (duration: 103.769974ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:42:12.425685Z","caller":"traceutil/trace.go:171","msg":"trace[874985590] transaction","detail":"{read_only:false; response_revision:1805; number_of_response:1; }","duration":"128.996586ms","start":"2025-04-14T14:42:12.296675Z","end":"2025-04-14T14:42:12.425672Z","steps":["trace[874985590] 'process raft request'  (duration: 128.079961ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T14:42:29.811595Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.362023ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11932452365827166964 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:25989634b465d2f3>","response":"size:42"}
	
	
	==> kernel <==
	 14:43:03 up 14 min,  0 users,  load average: 0.14, 0.19, 0.11
	Linux ha-290859 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2df8ccb8d6ed928a95e69ecd1be2105fc737c699aa26805820a0af0eca5bb50d] <==
	I0414 14:41:34.500339       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:41:34.500416       1 main.go:301] handling current node
	I0414 14:41:44.500407       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:41:44.500557       1 main.go:301] handling current node
	I0414 14:41:54.509039       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:41:54.509064       1 main.go:301] handling current node
	I0414 14:42:04.509599       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:42:04.509640       1 main.go:301] handling current node
	I0414 14:42:14.505184       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:42:14.505543       1 main.go:301] handling current node
	I0414 14:42:24.502960       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:42:24.503004       1 main.go:301] handling current node
	I0414 14:42:34.500754       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:42:34.501033       1 main.go:301] handling current node
	I0414 14:42:34.501166       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:42:34.501231       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:42:34.501702       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.112 Flags: [] Table: 0 Realm: 0} 
	I0414 14:42:44.500437       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:42:44.500523       1 main.go:301] handling current node
	I0414 14:42:44.500540       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:42:44.500545       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:42:54.501089       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:42:54.501145       1 main.go:301] handling current node
	I0414 14:42:54.501166       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:42:54.501175       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
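The routes.go lines above show kindnet programming a host route for the new node's pod CIDR: traffic for 10.244.1.0/24 is sent via 192.168.39.112 (ha-290859-m03), and the handler re-syncs node state roughly every ten seconds. A sketch of that route programming with github.com/vishvananda/netlink, assuming kindnet-like behavior rather than its exact code (Linux only):

    // Sketch: install/refresh a route to a peer node's pod CIDR.
    package main

    import (
        "log"
        "net"

        "github.com/vishvananda/netlink"
    )

    func main() {
        _, dst, err := net.ParseCIDR("10.244.1.0/24") // peer node's pod CIDR
        if err != nil {
            log.Fatal(err)
        }
        route := &netlink.Route{
            Dst: dst,
            Gw:  net.ParseIP("192.168.39.112"), // peer node's InternalIP
        }
        // RouteReplace is idempotent: it adds the route or updates it in place,
        // which suits the periodic re-sync visible in the log.
        if err := netlink.RouteReplace(route); err != nil {
            log.Fatal(err)
        }
    }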
	
	
	==> kube-apiserver [3607093f95b0430c4841d7be9ed19d0163ff2e9ee2889a44f89bd1ca07bf42d3] <==
	I0414 14:29:22.362271       1 autoregister_controller.go:144] Starting autoregister controller
	I0414 14:29:22.362276       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0414 14:29:22.362280       1 cache.go:39] Caches are synced for autoregister controller
	I0414 14:29:22.378719       1 controller.go:615] quota admission added evaluator for: namespaces
	I0414 14:29:22.457815       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0414 14:29:23.164003       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0414 14:29:23.168635       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0414 14:29:23.168816       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0414 14:29:23.763560       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0414 14:29:23.812117       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0414 14:29:23.884276       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0414 14:29:23.896601       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.110]
	I0414 14:29:23.897534       1 controller.go:615] quota admission added evaluator for: endpoints
	I0414 14:29:23.902387       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0414 14:29:24.193931       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0414 14:29:25.780107       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0414 14:29:25.808820       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0414 14:29:25.816856       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0414 14:29:29.653221       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0414 14:29:29.756960       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0414 14:41:55.019097       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52466: use of closed network connection
	E0414 14:41:55.440782       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52532: use of closed network connection
	E0414 14:41:55.859929       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52600: use of closed network connection
	E0414 14:41:58.277207       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52686: use of closed network connection
	E0414 14:41:58.438151       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52698: use of closed network connection
	
	
	==> kube-controller-manager [8263b35014337f6119ba3a0d6487090fd5b1b3b8a002a99623620e847d186847] <==
	I0414 14:30:26.371478       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
	I0414 14:37:12.908997       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
	I0414 14:42:20.033463       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
	I0414 14:42:29.935163       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-290859-m03\" does not exist"
	I0414 14:42:29.948852       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-290859-m03" podCIDRs=["10.244.1.0/24"]
	I0414 14:42:29.949152       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:29.949831       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:29.958386       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="234.248µs"
	I0414 14:42:29.963750       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:29.969981       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="39.002µs"
	I0414 14:42:30.275380       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:30.614411       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:33.964410       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-290859-m03"
	I0414 14:42:34.046665       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:39.961881       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:49.191468       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-290859-m03"
	I0414 14:42:49.192361       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:49.201252       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:49.216690       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="71.679µs"
	I0414 14:42:49.217122       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="45.948µs"
	I0414 14:42:49.230018       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="69.053µs"
	I0414 14:42:52.664944       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="13.387962ms"
	I0414 14:42:52.665652       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="82.546µs"
	I0414 14:42:53.979890       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:43:00.010906       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	
	
	==> kube-proxy [e22a81661302ff340c9846a7a06a13d955ab98cfe8e7088e0c805fb4f3eee8a2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0414 14:29:30.555771       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0414 14:29:30.580550       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.110"]
	E0414 14:29:30.580640       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0414 14:29:30.617235       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0414 14:29:30.617293       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0414 14:29:30.617328       1 server_linux.go:170] "Using iptables Proxier"
	I0414 14:29:30.620046       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0414 14:29:30.620989       1 server.go:497] "Version info" version="v1.32.2"
	I0414 14:29:30.621018       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 14:29:30.625365       1 config.go:329] "Starting node config controller"
	I0414 14:29:30.625863       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0414 14:29:30.628597       1 config.go:199] "Starting service config controller"
	I0414 14:29:30.628644       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0414 14:29:30.628665       1 config.go:105] "Starting endpoint slice config controller"
	I0414 14:29:30.628683       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0414 14:29:30.726314       1 shared_informer.go:320] Caches are synced for node config
	I0414 14:29:30.729639       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0414 14:29:30.729680       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [341626ffff967b14e3bfaa050905eba2b82a07223c0356ee50b5deeef6d9898b] <==
	E0414 14:29:22.288686       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:22.287191       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:22.288704       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:22.286394       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0414 14:29:22.288719       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0414 14:29:22.285771       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.108289       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0414 14:29:23.108351       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.153824       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:23.153954       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.203744       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0414 14:29:23.203977       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0414 14:29:23.367236       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0414 14:29:23.367550       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.396026       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0414 14:29:23.396243       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.401643       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:23.401820       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.425454       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0414 14:29:23.425684       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.433181       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:23.433222       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.457688       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0414 14:29:23.457949       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0414 14:29:25.662221       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 14 14:38:25 ha-290859 kubelet[1300]: E0414 14:38:25.691874    1300 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:38:25 ha-290859 kubelet[1300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:38:25 ha-290859 kubelet[1300]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:38:25 ha-290859 kubelet[1300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:38:25 ha-290859 kubelet[1300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 14:39:25 ha-290859 kubelet[1300]: E0414 14:39:25.692811    1300 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:39:25 ha-290859 kubelet[1300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:39:25 ha-290859 kubelet[1300]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:39:25 ha-290859 kubelet[1300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:39:25 ha-290859 kubelet[1300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 14:40:25 ha-290859 kubelet[1300]: E0414 14:40:25.693003    1300 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:40:25 ha-290859 kubelet[1300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:40:25 ha-290859 kubelet[1300]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:40:25 ha-290859 kubelet[1300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:40:25 ha-290859 kubelet[1300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 14:41:25 ha-290859 kubelet[1300]: E0414 14:41:25.692589    1300 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:41:25 ha-290859 kubelet[1300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:41:25 ha-290859 kubelet[1300]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:41:25 ha-290859 kubelet[1300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:41:25 ha-290859 kubelet[1300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 14:42:25 ha-290859 kubelet[1300]: E0414 14:42:25.692394    1300 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:42:25 ha-290859 kubelet[1300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:42:25 ha-290859 kubelet[1300]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:42:25 ha-290859 kubelet[1300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:42:25 ha-290859 kubelet[1300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
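Note on the recurring kubelet errors above: the "Could not set up iptables canary" messages repeat every minute because the guest cannot initialize the ip6tables nat table. A minimal check, assuming the ha-290859 profile from this run is still up, would be:

    out/minikube-linux-amd64 -p ha-290859 ssh -- "sudo modprobe ip6table_nat; sudo ip6tables -t nat -L -n"

If modprobe reports the module is missing, the buildroot guest likely lacks ip6table_nat and the canary errors are cosmetic noise rather than the cause of this failure.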
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-290859 -n ha-290859
helpers_test.go:261: (dbg) Run:  kubectl --context ha-290859 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-58667487b6-q9jvx
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-290859 describe pod busybox-58667487b6-q9jvx
helpers_test.go:282: (dbg) kubectl --context ha-290859 describe pod busybox-58667487b6-q9jvx:

-- stdout --
	Name:             busybox-58667487b6-q9jvx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=58667487b6
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-58667487b6
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fklg7 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-fklg7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  2m38s (x3 over 13m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  26s (x2 over 35s)    default-scheduler  0/2 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  5s (x2 over 15s)     default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

-- /stdout --
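Reading the events above: the pending busybox replica carries pod anti-affinity against its siblings, so it needs a Ready node that does not already run one. With a single node it could not schedule at all, and while ha-290859-m02 still carried the not-ready taint it was excluded as well. A quick way to confirm the node side, assuming the same kubectl context as the test:

    kubectl --context ha-290859 get nodes
    kubectl --context ha-290859 describe node ha-290859-m02 | grep -A2 Taints

Once a node reports Ready with no taints and no existing busybox replica, the scheduler should place the pod; the final event shows anti-affinity as the only remaining blocker.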
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (2.35s)

TestMultiControlPlane/serial/RestartSecondaryNode (314.9s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-290859 node start m02 -v=7 --alsologtostderr
E0414 14:45:58.679793 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/addons-537199/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-290859 node start m02 -v=7 --alsologtostderr: exit status 80 (4m18.015480499s)

-- stdout --
	* Starting "ha-290859-m02" control-plane node in "ha-290859" cluster
	* Restarting existing kvm2 VM for "ha-290859-m02" ...
	* Preparing Kubernetes v1.32.2 on containerd 1.7.23 ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	
	

-- /stdout --
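The stdout above stops after an empty "Enabled addons:" line even though the command ran for over four minutes, which suggests the failure happened while verifying Kubernetes components rather than while restarting the VM. Two commands that would show the post-restart state, using the same binary and profile as the test:

    out/minikube-linux-amd64 -p ha-290859 node list
    out/minikube-linux-amd64 -p ha-290859 status -v=7 --alsologtostderr

The stderr log that follows traces the restart step by step, from acquiring the machines lock through SSH provisioning and containerd reconfiguration.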
** stderr ** 
	I0414 14:43:04.623487 1218338 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:43:04.623603 1218338 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:43:04.623613 1218338 out.go:358] Setting ErrFile to fd 2...
	I0414 14:43:04.623616 1218338 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:43:04.623871 1218338 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
	I0414 14:43:04.624178 1218338 mustload.go:65] Loading cluster: ha-290859
	I0414 14:43:04.624584 1218338 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:43:04.625001 1218338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:43:04.625079 1218338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:43:04.642752 1218338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32823
	I0414 14:43:04.643353 1218338 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:43:04.643895 1218338 main.go:141] libmachine: Using API Version  1
	I0414 14:43:04.643924 1218338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:43:04.644325 1218338 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:43:04.644572 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetState
	W0414 14:43:04.646354 1218338 host.go:58] "ha-290859-m02" host status: Stopped
	I0414 14:43:04.648488 1218338 out.go:177] * Starting "ha-290859-m02" control-plane node in "ha-290859" cluster
	I0414 14:43:04.649697 1218338 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:43:04.649750 1218338 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
	I0414 14:43:04.649772 1218338 cache.go:56] Caching tarball of preloaded images
	I0414 14:43:04.649898 1218338 preload.go:172] Found /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0414 14:43:04.649911 1218338 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0414 14:43:04.650074 1218338 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:43:04.650320 1218338 start.go:360] acquireMachinesLock for ha-290859-m02: {Name:mk496006d22a0565bb9e0d565e1b3cb0cf0971cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 14:43:04.650431 1218338 start.go:364] duration metric: took 37.1µs to acquireMachinesLock for "ha-290859-m02"
	I0414 14:43:04.650454 1218338 start.go:96] Skipping create...Using existing machine configuration
	I0414 14:43:04.650465 1218338 fix.go:54] fixHost starting: m02
	I0414 14:43:04.650752 1218338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:43:04.650785 1218338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:43:04.666821 1218338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44653
	I0414 14:43:04.667391 1218338 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:43:04.667890 1218338 main.go:141] libmachine: Using API Version  1
	I0414 14:43:04.667912 1218338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:43:04.668346 1218338 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:43:04.668591 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:43:04.668797 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetState
	I0414 14:43:04.670410 1218338 fix.go:112] recreateIfNeeded on ha-290859-m02: state=Stopped err=<nil>
	I0414 14:43:04.670462 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	W0414 14:43:04.670632 1218338 fix.go:138] unexpected machine state, will restart: <nil>
	I0414 14:43:04.672592 1218338 out.go:177] * Restarting existing kvm2 VM for "ha-290859-m02" ...
	I0414 14:43:04.674013 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .Start
	I0414 14:43:04.674262 1218338 main.go:141] libmachine: (ha-290859-m02) starting domain...
	I0414 14:43:04.674284 1218338 main.go:141] libmachine: (ha-290859-m02) ensuring networks are active...
	I0414 14:43:04.675153 1218338 main.go:141] libmachine: (ha-290859-m02) Ensuring network default is active
	I0414 14:43:04.675736 1218338 main.go:141] libmachine: (ha-290859-m02) Ensuring network mk-ha-290859 is active
	I0414 14:43:04.676207 1218338 main.go:141] libmachine: (ha-290859-m02) getting domain XML...
	I0414 14:43:04.677099 1218338 main.go:141] libmachine: (ha-290859-m02) creating domain...
	I0414 14:43:05.925169 1218338 main.go:141] libmachine: (ha-290859-m02) waiting for IP...
	I0414 14:43:05.926204 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:43:05.926702 1218338 main.go:141] libmachine: (ha-290859-m02) found domain IP: 192.168.39.111
	I0414 14:43:05.926729 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has current primary IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:43:05.926739 1218338 main.go:141] libmachine: (ha-290859-m02) reserving static IP address...
	I0414 14:43:05.927368 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "ha-290859-m02", mac: "52:54:00:f0:fd:94", ip: "192.168.39.111"} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:43:05.927402 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | skip adding static IP to network mk-ha-290859 - found existing host DHCP lease matching {name: "ha-290859-m02", mac: "52:54:00:f0:fd:94", ip: "192.168.39.111"}
	I0414 14:43:05.927426 1218338 main.go:141] libmachine: (ha-290859-m02) reserved static IP address 192.168.39.111 for domain ha-290859-m02
	I0414 14:43:05.927440 1218338 main.go:141] libmachine: (ha-290859-m02) waiting for SSH...
	I0414 14:43:05.927456 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | Getting to WaitForSSH function...
	I0414 14:43:05.929590 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:43:05.930000 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:43:05.930045 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:43:05.930217 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | Using SSH client type: external
	I0414 14:43:05.930275 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa (-rw-------)
	I0414 14:43:05.930316 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:43:05.930334 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | About to run SSH command:
	I0414 14:43:05.930343 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | exit 0
	I0414 14:43:17.063533 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | SSH cmd err, output: <nil>: 
	I0414 14:43:17.063857 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
	I0414 14:43:17.064504 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:43:17.067442 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:43:17.068023 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:43:15 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:43:17.068051 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:43:17.068279 1218338 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:43:17.068470 1218338 machine.go:93] provisionDockerMachine start ...
	I0414 14:43:17.068490 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:43:17.068735 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:43:17.070975 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:43:17.071289 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:43:15 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:43:17.071315 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:43:17.071465 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:43:17.071637 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:43:17.071772 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:43:17.071919 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:43:17.072074 1218338 main.go:141] libmachine: Using SSH client type: native
	I0414 14:43:17.072312 1218338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:43:17.072326 1218338 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 14:43:17.175355 1218338 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0414 14:43:17.175385 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:43:17.175656 1218338 buildroot.go:166] provisioning hostname "ha-290859-m02"
	I0414 14:43:17.175684 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:43:17.175873 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:43:17.178615 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:43:17.179092 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:43:15 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:43:17.179117 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:43:17.179351 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:43:17.179575 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:43:17.179793 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:43:17.179987 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:43:17.180198 1218338 main.go:141] libmachine: Using SSH client type: native
	I0414 14:43:17.180412 1218338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:43:17.180424 1218338 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-290859-m02 && echo "ha-290859-m02" | sudo tee /etc/hostname
	I0414 14:43:17.297488 1218338 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-290859-m02
	
	I0414 14:43:17.297519 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:43:17.300166 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:43:17.300519 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:43:15 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:43:17.300541 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:43:17.300762 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:43:17.300963 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:43:17.301163 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:43:17.301337 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:43:17.301607 1218338 main.go:141] libmachine: Using SSH client type: native
	I0414 14:43:17.301831 1218338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:43:17.301850 1218338 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-290859-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-290859-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-290859-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:43:17.411863 1218338 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:43:17.411929 1218338 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1196368/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1196368/.minikube}
	I0414 14:43:17.411985 1218338 buildroot.go:174] setting up certificates
	I0414 14:43:17.411999 1218338 provision.go:84] configureAuth start
	I0414 14:43:17.412011 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:43:17.412336 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:43:17.414927 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:43:17.415376 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:43:15 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:43:17.415421 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:43:17.415590 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:43:17.417818 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:43:17.418252 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:43:15 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:43:17.418278 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:43:17.418429 1218338 provision.go:143] copyHostCerts
	I0414 14:43:17.418468 1218338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:43:17.418500 1218338 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem, removing ...
	I0414 14:43:17.418509 1218338 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:43:17.418572 1218338 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem (1082 bytes)
	I0414 14:43:17.418663 1218338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:43:17.418680 1218338 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem, removing ...
	I0414 14:43:17.418687 1218338 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:43:17.418710 1218338 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem (1123 bytes)
	I0414 14:43:17.418825 1218338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:43:17.418846 1218338 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem, removing ...
	I0414 14:43:17.418850 1218338 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:43:17.418875 1218338 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem (1675 bytes)
	I0414 14:43:17.418943 1218338 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem org=jenkins.ha-290859-m02 san=[127.0.0.1 192.168.39.111 ha-290859-m02 localhost minikube]
	I0414 14:43:18.254475 1218338 provision.go:177] copyRemoteCerts
	I0414 14:43:18.254562 1218338 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:43:18.254594 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:43:18.257284 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:43:18.257640 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:43:15 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:43:18.257664 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:43:18.257814 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:43:18.258009 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:43:18.258186 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:43:18.258340 1218338 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:43:18.341300 1218338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0414 14:43:18.341387 1218338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 14:43:18.364783 1218338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0414 14:43:18.364868 1218338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 14:43:18.387520 1218338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0414 14:43:18.387587 1218338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0414 14:43:18.410332 1218338 provision.go:87] duration metric: took 998.314023ms to configureAuth
	I0414 14:43:18.410371 1218338 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:43:18.410644 1218338 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:43:18.410664 1218338 machine.go:96] duration metric: took 1.342180249s to provisionDockerMachine
	I0414 14:43:18.410676 1218338 start.go:293] postStartSetup for "ha-290859-m02" (driver="kvm2")
	I0414 14:43:18.410693 1218338 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:43:18.410730 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:43:18.411049 1218338 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:43:18.411080 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:43:18.414369 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:43:18.414811 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:43:15 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:43:18.414857 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:43:18.415064 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:43:18.415266 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:43:18.415451 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:43:18.415611 1218338 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:43:18.498703 1218338 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:43:18.502691 1218338 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:43:18.502719 1218338 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/addons for local assets ...
	I0414 14:43:18.502797 1218338 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/files for local assets ...
	I0414 14:43:18.502870 1218338 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> 12036392.pem in /etc/ssl/certs
	I0414 14:43:18.502879 1218338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /etc/ssl/certs/12036392.pem
	I0414 14:43:18.503019 1218338 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:43:18.512126 1218338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:43:18.533276 1218338 start.go:296] duration metric: took 122.582762ms for postStartSetup
	I0414 14:43:18.533322 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:43:18.533632 1218338 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0414 14:43:18.533664 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:43:18.536486 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:43:18.536870 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:43:15 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:43:18.536899 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:43:18.537075 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:43:18.537285 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:43:18.537421 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:43:18.537564 1218338 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:43:18.617778 1218338 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0414 14:43:18.617873 1218338 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0414 14:43:18.653377 1218338 fix.go:56] duration metric: took 14.002904046s for fixHost
	I0414 14:43:18.653438 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:43:18.656627 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:43:18.657075 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:43:15 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:43:18.657119 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:43:18.657291 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:43:18.657507 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:43:18.657705 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:43:18.657889 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:43:18.658139 1218338 main.go:141] libmachine: Using SSH client type: native
	I0414 14:43:18.658368 1218338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:43:18.658380 1218338 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:43:18.764016 1218338 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744641798.723290276
	
	I0414 14:43:18.764040 1218338 fix.go:216] guest clock: 1744641798.723290276
	I0414 14:43:18.764047 1218338 fix.go:229] Guest: 2025-04-14 14:43:18.723290276 +0000 UTC Remote: 2025-04-14 14:43:18.653405465 +0000 UTC m=+14.069454943 (delta=69.884811ms)
	I0414 14:43:18.764080 1218338 fix.go:200] guest clock delta is within tolerance: 69.884811ms
	I0414 14:43:18.764086 1218338 start.go:83] releasing machines lock for "ha-290859-m02", held for 14.11364342s
	I0414 14:43:18.764107 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:43:18.764505 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:43:18.767315 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:43:18.767787 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:43:15 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:43:18.767817 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:43:18.768000 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:43:18.768497 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:43:18.768687 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:43:18.768790 1218338 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:43:18.768841 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:43:18.768966 1218338 ssh_runner.go:195] Run: systemctl --version
	I0414 14:43:18.768999 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:43:18.772089 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:43:18.772388 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:43:18.772576 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:43:15 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:43:18.772604 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:43:18.772738 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:43:18.772879 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:43:15 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:43:18.772916 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:43:18.772940 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:43:18.773152 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:43:18.773160 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:43:18.773367 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:43:18.773402 1218338 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:43:18.773529 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:43:18.773640 1218338 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:43:18.853637 1218338 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 14:43:18.880220 1218338 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:43:18.880302 1218338 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:43:18.896482 1218338 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 14:43:18.896518 1218338 start.go:495] detecting cgroup driver to use...
	I0414 14:43:18.896611 1218338 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0414 14:43:18.924234 1218338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0414 14:43:18.936976 1218338 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:43:18.937027 1218338 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:43:18.950926 1218338 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:43:18.963540 1218338 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:43:19.075727 1218338 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:43:19.219176 1218338 docker.go:233] disabling docker service ...
	I0414 14:43:19.219282 1218338 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:43:19.233471 1218338 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:43:19.247350 1218338 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:43:19.383935 1218338 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:43:19.519329 1218338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 14:43:19.537595 1218338 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:43:19.555241 1218338 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0414 14:43:19.564683 1218338 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0414 14:43:19.574882 1218338 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0414 14:43:19.574983 1218338 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0414 14:43:19.584168 1218338 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:43:19.593483 1218338 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0414 14:43:19.602495 1218338 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:43:19.611477 1218338 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:43:19.620843 1218338 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0414 14:43:19.630692 1218338 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0414 14:43:19.640368 1218338 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0414 14:43:19.650247 1218338 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:43:19.659060 1218338 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:43:19.659112 1218338 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:43:19.670671 1218338 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 14:43:19.679947 1218338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:43:19.789643 1218338 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0414 14:43:19.817223 1218338 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0414 14:43:19.817343 1218338 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:43:19.821964 1218338 retry.go:31] will retry after 1.140799605s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0414 14:43:20.963366 1218338 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:43:20.968958 1218338 start.go:563] Will wait 60s for crictl version
	I0414 14:43:20.969028 1218338 ssh_runner.go:195] Run: which crictl
	I0414 14:43:20.973211 1218338 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:43:21.014989 1218338 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0414 14:43:21.015080 1218338 ssh_runner.go:195] Run: containerd --version
	I0414 14:43:21.040994 1218338 ssh_runner.go:195] Run: containerd --version
	I0414 14:43:21.066310 1218338 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.23 ...
	I0414 14:43:21.067828 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:43:21.070588 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:43:21.071009 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:43:15 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:43:21.071042 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:43:21.071289 1218338 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 14:43:21.075369 1218338 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:43:21.087007 1218338 mustload.go:65] Loading cluster: ha-290859
	I0414 14:43:21.087242 1218338 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:43:21.087528 1218338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:43:21.087570 1218338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:43:21.103296 1218338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33099
	I0414 14:43:21.103781 1218338 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:43:21.104282 1218338 main.go:141] libmachine: Using API Version  1
	I0414 14:43:21.104310 1218338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:43:21.104652 1218338 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:43:21.104831 1218338 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:43:21.106301 1218338 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:43:21.106620 1218338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:43:21.106658 1218338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:43:21.121923 1218338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41999
	I0414 14:43:21.122360 1218338 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:43:21.122757 1218338 main.go:141] libmachine: Using API Version  1
	I0414 14:43:21.122776 1218338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:43:21.123145 1218338 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:43:21.123377 1218338 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:43:21.123560 1218338 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859 for IP: 192.168.39.111
	I0414 14:43:21.123574 1218338 certs.go:194] generating shared ca certs ...
	I0414 14:43:21.123591 1218338 certs.go:226] acquiring lock for ca certs: {Name:mk7215406b4c41badf9eca6bf9f1036fd88f670e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:43:21.123760 1218338 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key
	I0414 14:43:21.123826 1218338 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key
	I0414 14:43:21.123843 1218338 certs.go:256] generating profile certs ...
	I0414 14:43:21.123948 1218338 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key
	I0414 14:43:21.124101 1218338 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e
	I0414 14:43:21.124205 1218338 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key
	I0414 14:43:21.124221 1218338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0414 14:43:21.124242 1218338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0414 14:43:21.124260 1218338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0414 14:43:21.124277 1218338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0414 14:43:21.124292 1218338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0414 14:43:21.124317 1218338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0414 14:43:21.124336 1218338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0414 14:43:21.124363 1218338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0414 14:43:21.124445 1218338 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem (1338 bytes)
	W0414 14:43:21.124489 1218338 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639_empty.pem, impossibly tiny 0 bytes
	I0414 14:43:21.124504 1218338 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 14:43:21.124538 1218338 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem (1082 bytes)
	I0414 14:43:21.124565 1218338 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:43:21.124604 1218338 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem (1675 bytes)
	I0414 14:43:21.124656 1218338 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:43:21.124697 1218338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem -> /usr/share/ca-certificates/1203639.pem
	I0414 14:43:21.124717 1218338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /usr/share/ca-certificates/12036392.pem
	I0414 14:43:21.124733 1218338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:43:21.124765 1218338 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:43:21.127844 1218338 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:43:21.128444 1218338 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:43:21.128467 1218338 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:43:21.128570 1218338 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:43:21.128727 1218338 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:43:21.128927 1218338 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:43:21.129066 1218338 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
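The sshutil client above dials the control-plane VM directly with the per-machine key pair (IP 192.168.39.110, port 22, user docker). A minimal sketch of an equivalent key-based SSH client in Go using golang.org/x/crypto/ssh; the package and helper names are illustrative, not minikube's actual API:

package vmssh

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// newSSHClient dials host:22 as user with a private key file, mirroring
// the IP/Port/SSHKeyPath/Username fields logged above.
func newSSHClient(host, user, keyPath string) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
	}
	return ssh.Dial("tcp", fmt.Sprintf("%s:22", host), cfg)
}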
	I0414 14:43:21.203640 1218338 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0414 14:43:21.208444 1218338 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0414 14:43:21.218431 1218338 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0414 14:43:21.222260 1218338 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0414 14:43:21.232210 1218338 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0414 14:43:21.238516 1218338 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0414 14:43:21.251778 1218338 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0414 14:43:21.256194 1218338 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0414 14:43:21.266052 1218338 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0414 14:43:21.269738 1218338 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0414 14:43:21.280173 1218338 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0414 14:43:21.284189 1218338 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0414 14:43:21.294685 1218338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:43:21.318963 1218338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:43:21.341043 1218338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:43:21.362799 1218338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:43:21.385596 1218338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0414 14:43:21.410567 1218338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 14:43:21.436704 1218338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:43:21.460821 1218338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 14:43:21.482923 1218338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem --> /usr/share/ca-certificates/1203639.pem (1338 bytes)
	I0414 14:43:21.504682 1218338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /usr/share/ca-certificates/12036392.pem (1708 bytes)
	I0414 14:43:21.527747 1218338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:43:21.550240 1218338 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0414 14:43:21.565923 1218338 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0414 14:43:21.581596 1218338 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0414 14:43:21.597754 1218338 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0414 14:43:21.612919 1218338 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0414 14:43:21.630203 1218338 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0414 14:43:21.646262 1218338 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0414 14:43:21.662432 1218338 ssh_runner.go:195] Run: openssl version
	I0414 14:43:21.667747 1218338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12036392.pem && ln -fs /usr/share/ca-certificates/12036392.pem /etc/ssl/certs/12036392.pem"
	I0414 14:43:21.678227 1218338 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12036392.pem
	I0414 14:43:21.682473 1218338 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/12036392.pem
	I0414 14:43:21.682522 1218338 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12036392.pem
	I0414 14:43:21.687984 1218338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12036392.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 14:43:21.698274 1218338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:43:21.708457 1218338 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:43:21.712582 1218338 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:43:21.712643 1218338 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:43:21.717764 1218338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 14:43:21.727364 1218338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1203639.pem && ln -fs /usr/share/ca-certificates/1203639.pem /etc/ssl/certs/1203639.pem"
	I0414 14:43:21.736954 1218338 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1203639.pem
	I0414 14:43:21.740819 1218338 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1203639.pem
	I0414 14:43:21.740875 1218338 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1203639.pem
	I0414 14:43:21.746101 1218338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1203639.pem /etc/ssl/certs/51391683.0"
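The openssl x509 -hash calls above print each CA's subject-name hash, and the paired ln -fs commands publish the certificate under <hash>.0 in /etc/ssl/certs - the lookup convention OpenSSL-based clients use to locate a trusted CA (hence the 3ec20f2e.0, b5213941.0, and 51391683.0 links). A short sketch of that hash-and-link step in Go, assuming a hypothetical linkByHash helper that shells out to the same openssl binary:

package certs

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash reproduces the "openssl x509 -hash" + "ln -fs" pair from the
// log: compute the CA's subject hash and expose the cert as <hash>.0 so
// OpenSSL's c_rehash-style lookup can find it in /etc/ssl/certs.
func linkByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // -f semantics: replace any stale link
	return os.Symlink(certPath, link)
}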
	I0414 14:43:21.756453 1218338 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:43:21.760314 1218338 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 14:43:21.760364 1218338 kubeadm.go:934] updating node {m02 192.168.39.111 8443 v1.32.2 containerd true true} ...
	I0414 14:43:21.760467 1218338 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-290859-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
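The drop-in above blanks the stock ExecStart and restates it with this node's flags (--hostname-override=ha-290859-m02, --node-ip=192.168.39.111). A minimal sketch of templating such a drop-in with text/template; the field and function names here are illustrative, not minikube's own:

package kubelet

import (
	"bytes"
	"text/template"
)

// unitTmpl mirrors the [Service] override logged above: clear ExecStart,
// then restate it with the node-specific flags.
var unitTmpl = template.Must(template.New("kubelet").Parse(`[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
`))

// renderDropIn fills the template for one node, e.g.
// ("v1.32.2", "ha-290859-m02", "192.168.39.111") as in this run.
func renderDropIn(version, nodeName, nodeIP string) (string, error) {
	var buf bytes.Buffer
	err := unitTmpl.Execute(&buf, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{version, nodeName, nodeIP})
	return buf.String(), err
}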
	I0414 14:43:21.760491 1218338 kube-vip.go:115] generating kube-vip config ...
	I0414 14:43:21.760524 1218338 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0414 14:43:21.776851 1218338 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0414 14:43:21.776993 1218338 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.10
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
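This manifest is copied to /etc/kubernetes/manifests/kube-vip.yaml a moment later (see the scp at 14:43:22.247), where the kubelet runs it as a static pod; the vip_leaseduration/vip_renewdeadline/vip_retryperiod values tune the leader election that decides which control-plane node holds the VIP 192.168.39.254. A sanity-check sketch that such a generated file decodes as a single Pod, using sigs.k8s.io/yaml (the helper name is illustrative; minikube itself does not run this check):

package kubevip

import (
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

// checkManifest verifies the generated file decodes as one corev1.Pod,
// which is what the kubelet requires of anything dropped into
// /etc/kubernetes/manifests as a static pod.
func checkManifest(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var pod corev1.Pod
	if err := yaml.Unmarshal(data, &pod); err != nil {
		return fmt.Errorf("not a valid Pod manifest: %w", err)
	}
	if pod.Kind != "Pod" || len(pod.Spec.Containers) == 0 {
		return fmt.Errorf("unexpected manifest shape: kind=%q", pod.Kind)
	}
	return nil
}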
	I0414 14:43:21.777061 1218338 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 14:43:21.787380 1218338 binaries.go:47] Didn't find k8s binaries: didn't find preexisting kubeadm
	Initiating transfer...
	I0414 14:43:21.787430 1218338 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.2
	I0414 14:43:21.796684 1218338 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
	I0414 14:43:21.796708 1218338 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet.sha256
	I0414 14:43:21.796747 1218338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:43:21.796712 1218338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubectl -> /var/lib/minikube/binaries/v1.32.2/kubectl
	I0414 14:43:21.796686 1218338 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256
	I0414 14:43:21.796859 1218338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubeadm -> /var/lib/minikube/binaries/v1.32.2/kubeadm
	I0414 14:43:21.796905 1218338 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl
	I0414 14:43:21.796964 1218338 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm
	I0414 14:43:21.800792 1218338 ssh_runner.go:356] copy: skipping /var/lib/minikube/binaries/v1.32.2/kubectl (exists)
	I0414 14:43:21.813606 1218338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubelet -> /var/lib/minikube/binaries/v1.32.2/kubelet
	I0414 14:43:21.813684 1218338 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubeadm': No such file or directory
	I0414 14:43:21.813696 1218338 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet
	I0414 14:43:21.813725 1218338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubeadm --> /var/lib/minikube/binaries/v1.32.2/kubeadm (70942872 bytes)
	I0414 14:43:21.829252 1218338 ssh_runner.go:356] copy: skipping /var/lib/minikube/binaries/v1.32.2/kubelet (exists)
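The ?checksum=file:...sha256 suffix on the download URLs above asks the fetcher to verify each binary against the digest published alongside it on dl.k8s.io; already-present binaries (kubectl, kubelet here) are skipped and only kubeadm is transferred. A minimal sketch of that verification, assuming the published .sha256 file contains the bare hex digest (helper names are illustrative):

package download

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetchChecksum downloads e.g. .../kubeadm.sha256 and returns the hex digest.
func fetchChecksum(url string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	b, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

// verify streams the binary and compares its SHA-256 to the published one.
func verify(binaryURL, sumURL string) error {
	want, err := fetchChecksum(sumURL)
	if err != nil {
		return err
	}
	resp, err := http.Get(binaryURL)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	h := sha256.New()
	if _, err := io.Copy(h, resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}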
	I0414 14:43:22.206389 1218338 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0414 14:43:22.215841 1218338 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0414 14:43:22.231674 1218338 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 14:43:22.247180 1218338 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1442 bytes)
	I0414 14:43:22.262773 1218338 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0414 14:43:22.266269 1218338 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
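The one-liner above filters any stale control-plane.minikube.internal entry out of /etc/hosts, appends the VIP mapping, and copies the staged file back in one step, so the HA VIP 192.168.39.254 resolves locally on the new node. An equivalent sketch in Go (illustrative helper name), doing the same filter-append-replace:

package hosts

import (
	"os"
	"strings"
)

// pinControlPlane rewrites an /etc/hosts-style file, dropping any existing
// line for host and appending ip<TAB>host, just as the grep -v / echo
// pipeline in the log does.
func pinControlPlane(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	keep := lines[:0]
	for _, line := range lines {
		if !strings.HasSuffix(line, "\t"+host) {
			keep = append(keep, line)
		}
	}
	keep = append(keep, ip+"\t"+host)
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(keep, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path) // replace in one step
}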
	I0414 14:43:22.277201 1218338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:43:22.383541 1218338 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:43:22.400665 1218338 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0414 14:43:22.400805 1218338 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 14:43:22.400994 1218338 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:43:22.402588 1218338 out.go:177] * Verifying Kubernetes components...
	I0414 14:43:22.402588 1218338 out.go:177] * Enabled addons: 
	I0414 14:43:22.404228 1218338 addons.go:514] duration metric: took 3.436827ms for enable addons: enabled=[]
	I0414 14:43:22.404273 1218338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:43:22.560791 1218338 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:43:22.577076 1218338 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:43:22.577330 1218338 kapi.go:59] client config for ha-290859: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt", KeyFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key", CAFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0414 14:43:22.577452 1218338 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.110:8443
	I0414 14:43:22.577996 1218338 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0414 14:43:22.578028 1218338 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0414 14:43:22.578036 1218338 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0414 14:43:22.578043 1218338 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0414 14:43:22.578053 1218338 cert_rotation.go:140] Starting client certificate rotation controller
	I0414 14:43:22.578501 1218338 node_ready.go:35] waiting up to 6m0s for node "ha-290859-m02" to be "Ready" ...
	I0414 14:43:22.578621 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:22.578633 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:22.578644 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:22.578650 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:22.587187 1218338 round_trippers.go:581] Response Status: 404 Not Found in 8 milliseconds
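The 404s that follow are expected rather than fatal: the kubelet on m02 has been started, but its Node object has not registered with the API server yet, so the readiness wait keeps polling GET /api/v1/nodes/ha-290859-m02 roughly every 500ms and treats NotFound as "try again". A client-go sketch of that wait loop under the same assumptions (function name illustrative; minikube's node_ready.go drives its own round-tripper logging rather than this helper):

package nodewait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls until the named node exists and reports Ready,
// tolerating 404 NotFound while the kubelet registers - the pattern
// visible in the round_trippers log above.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return false, nil // node not registered yet: keep polling
			}
			if err != nil {
				return false, err
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}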
	I0414 14:43:23.079083 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:23.079111 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:23.079132 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:23.079141 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:23.081572 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:23.579387 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:23.579416 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:23.579427 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:23.579432 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:23.581494 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:24.079409 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:24.079462 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:24.079474 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:24.079480 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:24.082289 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:24.579098 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:24.579122 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:24.579132 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:24.579138 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:24.581936 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:24.582034 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:43:25.079105 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:25.079138 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:25.079151 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:25.079169 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:25.082256 1218338 round_trippers.go:581] Response Status: 404 Not Found in 3 milliseconds
	I0414 14:43:25.579031 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:25.579061 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:25.579073 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:25.579081 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:25.581956 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:26.079804 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:26.079845 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:26.079857 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:26.079866 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:26.082153 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:26.578894 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:26.578918 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:26.578926 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:26.578931 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:26.581316 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:27.079205 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:27.079234 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:27.079273 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:27.079281 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:27.081678 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:27.081787 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:43:27.579587 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:27.579626 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:27.579640 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:27.579650 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:27.581685 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:28.079514 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:28.079545 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:28.079558 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:28.079566 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:28.081933 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:28.579777 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:28.579826 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:28.579840 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:28.579849 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:28.582582 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:29.079390 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:29.079420 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:29.079432 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:29.079438 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:29.081944 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:29.082039 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:43:29.578754 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:29.578782 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:29.578791 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:29.578795 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:29.581380 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:30.079486 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:30.079513 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:30.079524 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:30.079530 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:30.081864 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:30.579588 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:30.579614 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:30.579623 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:30.579628 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:30.582354 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:31.079208 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:31.079239 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:31.079269 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:31.079277 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:31.081611 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:31.579118 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:31.579146 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:31.579157 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:31.579162 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:31.581776 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:31.581867 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:43:32.079542 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:32.079568 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:32.079577 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:32.079582 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:32.082093 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:32.578853 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:32.578878 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:32.578886 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:32.578892 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:32.581294 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:33.079033 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:33.079061 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:33.079069 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:33.079075 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:33.081525 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:33.579237 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:33.579291 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:33.579301 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:33.579308 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:33.581843 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:33.581970 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:43:34.079604 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:34.079690 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:34.079704 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:34.079711 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:34.082676 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:34.579435 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:34.579466 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:34.579478 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:34.579484 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:34.581810 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:35.078863 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:35.078888 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:35.078906 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:35.078913 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:35.081289 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:35.579156 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:35.579188 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:35.579200 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:35.579216 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:35.582113 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:35.582239 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:43:36.079017 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:36.079048 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:36.079058 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:36.079062 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:36.081788 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:36.579731 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:36.579757 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:36.579766 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:36.579770 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:36.582201 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:37.079041 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:37.079066 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:37.079075 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:37.079079 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:37.081587 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:37.579586 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:37.579612 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:37.579621 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:37.579626 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:37.581966 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:38.078791 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:38.078815 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:38.078846 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:38.078850 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:38.081841 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:38.081938 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:43:38.578700 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:38.578727 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:38.578741 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:38.578746 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:38.582013 1218338 round_trippers.go:581] Response Status: 404 Not Found in 3 milliseconds
	I0414 14:43:39.078858 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:39.078884 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:39.078896 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:39.078902 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:39.081941 1218338 round_trippers.go:581] Response Status: 404 Not Found in 3 milliseconds
	I0414 14:43:39.578824 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:39.578846 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:39.578861 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:39.578865 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:39.581477 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:40.079387 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:40.079412 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:40.079421 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:40.079425 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:40.082038 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:40.082199 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:43:40.578997 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:40.579029 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:40.579041 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:40.579050 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:40.581547 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:41.079465 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:41.079491 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:41.079500 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:41.079504 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:41.082129 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:41.578843 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:41.578871 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:41.578879 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:41.578884 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:41.581232 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:42.078964 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:42.078990 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:42.078998 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:42.079003 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:42.081375 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:42.579230 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:42.579267 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:42.579276 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:42.579281 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:42.581682 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:42.581773 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:43:43.079465 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:43.079493 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:43.079502 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:43.079506 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:43.081928 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:43.579747 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:43.579774 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:43.579784 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:43.579790 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:43.582368 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:44.079125 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:44.079154 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:44.079166 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:44.079174 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:44.081421 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:44.579238 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:44.579290 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:44.579302 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:44.579307 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:44.581981 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:44.582166 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:43:45.079075 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:45.079100 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:45.079108 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:45.079113 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:45.081931 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:45.579678 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:45.579703 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:45.579711 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:45.579716 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:45.582532 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:46.079284 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:46.079316 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:46.079325 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:46.079331 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:46.082156 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:46.579052 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:46.579076 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:46.579084 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:46.579089 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:46.581465 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:47.079163 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:47.079185 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:47.079194 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:47.079198 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:47.081719 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:47.081805 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:43:47.579612 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:47.579661 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:47.579674 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:47.579680 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:47.582384 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:48.079096 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:48.079120 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:48.079129 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:48.079134 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:48.081415 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:48.579138 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:48.579165 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:48.579174 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:48.579178 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:48.581799 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:49.079347 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:49.079371 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:49.079380 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:49.079386 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:49.081840 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:49.081952 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:43:49.579574 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:49.579604 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:49.579613 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:49.579618 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:49.582127 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:50.079065 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:50.079095 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:50.079107 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:50.079112 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:50.081263 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:50.578959 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:50.578985 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:50.578995 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:50.579000 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:50.581465 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:51.079133 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:51.079154 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:51.079172 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:51.079176 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:51.081639 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:51.579464 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:51.579490 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:51.579499 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:51.579503 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:51.582237 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:51.582346 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:43:52.078981 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:52.079011 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:52.079019 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:52.079024 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:52.081326 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:52.579242 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:52.579297 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:52.579311 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:52.579318 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:52.581841 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:53.079751 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:53.079779 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:53.079790 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:53.079796 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:53.082207 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:53.579028 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:53.579058 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:53.579068 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:53.579073 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:53.581502 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:54.079522 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:54.079550 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:54.079559 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:54.079564 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:54.082211 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:54.082299 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:43:54.579101 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:54.579130 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:54.579141 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:54.579146 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:54.581963 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:55.079000 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:55.079025 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:55.079034 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:55.079038 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:55.081711 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:55.579631 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:55.579661 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:55.579673 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:55.579681 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:55.582267 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:56.079098 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:56.079121 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:56.079131 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:56.079134 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:56.081794 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:56.579747 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:56.579776 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:56.579789 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:56.579797 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:56.581895 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:56.581984 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:43:57.079664 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:57.079689 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:57.079699 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:57.079703 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:57.083498 1218338 round_trippers.go:581] Response Status: 404 Not Found in 3 milliseconds
	I0414 14:43:57.579230 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:57.579294 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:57.579305 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:57.579312 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:57.581617 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:58.079439 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:58.079467 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:58.079476 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:58.079480 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:58.081717 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:58.579507 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:58.579532 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:58.579541 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:58.579552 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:58.582104 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:58.582220 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:43:59.078765 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:59.078789 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:59.078798 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:59.078803 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:59.081059 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:43:59.578779 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:43:59.578802 1218338 round_trippers.go:476] Request Headers:
	I0414 14:43:59.578811 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:43:59.578815 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:43:59.581407 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:00.079393 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:00.079416 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:00.079424 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:00.079428 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:00.081556 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:00.579407 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:00.579438 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:00.579452 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:00.579460 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:00.582141 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:01.078885 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:01.078910 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:01.078919 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:01.078923 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:01.080902 1218338 round_trippers.go:581] Response Status: 404 Not Found in 1 milliseconds
	I0414 14:44:01.081009 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:44:01.579620 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:01.579646 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:01.579658 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:01.579664 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:01.582267 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:02.079010 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:02.079034 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:02.079043 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:02.079046 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:02.081075 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:02.578979 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:02.579039 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:02.579053 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:02.579062 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:02.581262 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:03.079000 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:03.079031 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:03.079044 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:03.079050 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:03.081692 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:03.081781 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:44:03.579506 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:03.579534 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:03.579543 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:03.579557 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:03.581857 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:04.079639 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:04.079665 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:04.079676 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:04.079682 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:04.082019 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:04.578781 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:04.578806 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:04.578817 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:04.578824 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:04.581130 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:05.079724 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:05.079747 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:05.079756 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:05.079760 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:05.081707 1218338 round_trippers.go:581] Response Status: 404 Not Found in 1 milliseconds
	I0414 14:44:05.081808 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:44:05.579511 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:05.579546 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:05.579558 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:05.579565 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:05.581853 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:06.079660 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:06.079689 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:06.079701 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:06.079707 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:06.081787 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:06.579775 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:06.579807 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:06.579820 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:06.579828 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:06.582067 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:07.078873 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:07.078904 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:07.078914 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:07.078920 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:07.081414 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:07.579204 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:07.579238 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:07.579272 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:07.579282 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:07.581531 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:07.581638 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:44:08.079216 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:08.079243 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:08.079275 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:08.079283 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:08.081799 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:08.579754 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:08.579782 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:08.579796 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:08.579830 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:08.582260 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:09.079064 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:09.079087 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:09.079095 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:09.079100 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:09.081583 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:09.579487 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:09.579516 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:09.579527 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:09.579534 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:09.581722 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:09.581800 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:44:10.079748 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:10.079772 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:10.079780 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:10.079785 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:10.082422 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:10.579351 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:10.579377 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:10.579387 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:10.579392 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:10.581776 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:11.078709 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:11.078735 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:11.078748 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:11.078755 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:11.081167 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:11.578920 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:11.578944 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:11.578953 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:11.578958 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:11.581339 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:12.079176 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:12.079202 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:12.079211 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:12.079215 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:12.081634 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:12.081717 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:44:12.579484 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:12.579513 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:12.579526 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:12.579532 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:12.581711 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:13.079617 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:13.079641 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:13.079650 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:13.079654 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:13.082000 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:13.578843 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:13.578875 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:13.578888 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:13.578895 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:13.581189 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:14.079049 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:14.079078 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:14.079087 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:14.079092 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:14.081328 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:14.579144 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:14.579179 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:14.579193 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:14.579198 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:14.581676 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:14.581793 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:44:15.079665 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:15.079691 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:15.079702 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:15.079708 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:15.082111 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:15.578931 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:15.578955 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:15.578964 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:15.578968 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:15.581068 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:16.078898 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:16.078925 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:16.078933 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:16.078939 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:16.081177 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:16.578878 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:16.578901 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:16.578911 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:16.578914 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:16.581275 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:17.078995 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:17.079021 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:17.079029 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:17.079035 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:17.081176 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:17.081294 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:44:17.578922 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:17.578954 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:17.578964 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:17.578970 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:17.581153 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:18.078859 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:18.078887 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:18.078895 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:18.078900 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:18.081261 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:18.578963 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:18.578987 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:18.579016 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:18.579024 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:18.581283 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:19.079026 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:19.079051 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:19.079062 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:19.079069 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:19.081227 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:19.081332 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:44:19.578879 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:19.578901 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:19.578911 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:19.578915 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:19.581378 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:20.079441 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:20.079472 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:20.079487 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:20.079496 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:20.081871 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:20.579658 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:20.579685 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:20.579697 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:20.579703 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:20.582109 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:21.078816 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:21.078845 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:21.078857 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:21.078862 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:21.081025 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:21.578697 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:21.578722 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:21.578730 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:21.578735 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:21.581175 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:21.581267 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:44:22.079011 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:22.079039 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:22.079050 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:22.079057 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:22.081173 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:22.578957 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:22.578986 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:22.578996 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:22.579004 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:22.581446 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:23.079443 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:23.079483 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:23.079496 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:23.079504 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:23.081822 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:23.578726 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:23.578750 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:23.578760 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:23.578765 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:23.580898 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:24.078744 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:24.078774 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:24.078784 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:24.078792 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:24.081371 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:24.081454 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:44:24.579239 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:24.579286 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:24.579299 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:24.579306 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:24.581800 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:25.078770 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:25.078794 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:25.078820 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:25.078828 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:25.081230 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:25.579056 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:25.579133 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:25.579143 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:25.579147 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:25.581419 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:26.079273 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:26.079299 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:26.079307 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:26.079312 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:26.082004 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:26.082108 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:44:26.579791 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:26.579814 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:26.579823 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:26.579827 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:26.582110 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:27.078788 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:27.078812 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:27.078821 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:27.078825 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:27.081353 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:27.579084 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:27.579108 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:27.579117 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:27.579123 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:27.581569 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:28.079325 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:28.079349 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:28.079359 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:28.079363 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:28.081741 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:28.579531 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:28.579557 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:28.579565 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:28.579570 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:28.582642 1218338 round_trippers.go:581] Response Status: 404 Not Found in 3 milliseconds
	I0414 14:44:28.582727 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:44:29.079451 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:29.079478 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:29.079486 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:29.079492 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:29.081618 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:29.579408 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:29.579430 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:29.579444 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:29.579448 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:29.582042 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:30.078839 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:30.078862 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:30.078870 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:30.078875 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:30.081093 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:30.578842 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:30.578867 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:30.578875 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:30.578880 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:30.581316 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:31.079050 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:31.079078 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:31.079087 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:31.079092 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:31.081888 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:31.082009 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:44:31.579477 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:31.579502 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:31.579511 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:31.579516 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:31.581776 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:32.079629 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:32.079658 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:32.079667 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:32.079673 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:32.081880 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:32.580035 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:32.580316 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:32.580367 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:32.580385 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:32.583693 1218338 round_trippers.go:581] Response Status: 404 Not Found in 3 milliseconds
	I0414 14:44:33.079518 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:33.079549 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:33.079562 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:33.079571 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:33.081843 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:33.579710 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:33.579741 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:33.579755 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:33.579759 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:33.582572 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:33.582653 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:44:34.079368 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:34.079391 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:34.079400 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:34.079403 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:34.082216 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:34.578888 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:34.578912 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:34.578920 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:34.578926 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:34.581214 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:35.079378 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:35.079401 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:35.079410 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:35.079417 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:35.082898 1218338 round_trippers.go:581] Response Status: 404 Not Found in 3 milliseconds
	I0414 14:44:35.579679 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:35.579708 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:35.579721 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:35.579728 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:35.582194 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:36.078871 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:36.078899 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:36.078911 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:36.078916 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:36.081218 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:36.081304 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:44:36.579092 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:36.579118 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:36.579127 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:36.579131 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:36.581296 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:37.079005 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:37.079029 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:37.079037 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:37.079044 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:37.081402 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:37.579094 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:37.579124 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:37.579137 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:37.579144 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:37.581486 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:38.079366 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:38.079399 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:38.079427 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:38.079434 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:38.081843 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:38.081943 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:44:38.579620 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:38.579650 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:38.579662 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:38.579668 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:38.582642 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:39.079413 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:39.079444 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:39.079454 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:39.079460 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:39.082460 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:39.579210 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:39.579243 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:39.579282 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:39.579290 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:39.581628 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:40.079720 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:40.079744 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:40.079753 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:40.079759 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:40.081960 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:40.082039 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:44:40.579790 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:40.579818 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:40.579826 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:40.579830 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:40.582504 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:41.079223 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:41.079263 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:41.079272 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:41.079276 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:41.081636 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:41.579389 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:41.579414 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:41.579423 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:41.579427 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:41.582029 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:42.078772 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:42.078798 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:42.078807 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:42.078811 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:42.081134 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:42.578976 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:44:42.579003 1218338 round_trippers.go:476] Request Headers:
	I0414 14:44:42.579019 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:44:42.579026 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:44:42.581249 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:44:42.581333 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	[... roughly 110 further identical polling cycles elided: GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02 is retried every ~500ms and returns "404 Not Found" in 1-5 milliseconds each time through 14:45:38.582, with node_ready.go:53 logging error getting node "ha-290859-m02": nodes "ha-290859-m02" not found every few seconds; the node never appears ...]
	I0414 14:45:39.079481 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:45:39.079515 1218338 round_trippers.go:476] Request Headers:
	I0414 14:45:39.079528 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:45:39.079538 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:45:39.082190 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:45:39.578870 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:45:39.578894 1218338 round_trippers.go:476] Request Headers:
	I0414 14:45:39.578905 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:45:39.578911 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:45:39.581160 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:45:39.581263 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:45:40.079380 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:45:40.079406 1218338 round_trippers.go:476] Request Headers:
	I0414 14:45:40.079415 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:45:40.079418 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:45:40.081876 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:45:40.579769 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:45:40.579795 1218338 round_trippers.go:476] Request Headers:
	I0414 14:45:40.579811 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:45:40.579816 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:45:40.582505 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:45:41.079231 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:45:41.079278 1218338 round_trippers.go:476] Request Headers:
	I0414 14:45:41.079287 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:45:41.079293 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:45:41.081591 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:45:41.579354 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:45:41.579379 1218338 round_trippers.go:476] Request Headers:
	I0414 14:45:41.579389 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:45:41.579393 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:45:41.581850 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:45:41.581964 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:45:42.079676 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:45:42.079702 1218338 round_trippers.go:476] Request Headers:
	I0414 14:45:42.079714 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:45:42.079717 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:45:42.082019 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:45:42.578954 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:45:42.578978 1218338 round_trippers.go:476] Request Headers:
	I0414 14:45:42.578986 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:45:42.578990 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:45:42.581567 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:45:43.079357 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:45:43.079381 1218338 round_trippers.go:476] Request Headers:
	I0414 14:45:43.079393 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:45:43.079399 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:45:43.081873 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:45:43.579683 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:45:43.579708 1218338 round_trippers.go:476] Request Headers:
	I0414 14:45:43.579721 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:45:43.579728 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:45:43.582317 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:45:43.582442 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:45:44.079001 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:45:44.079026 1218338 round_trippers.go:476] Request Headers:
	I0414 14:45:44.079035 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:45:44.079042 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:45:44.081466 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:45:44.579175 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:45:44.579199 1218338 round_trippers.go:476] Request Headers:
	I0414 14:45:44.579207 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:45:44.579211 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:45:44.582060 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:45:45.079070 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:45:45.079100 1218338 round_trippers.go:476] Request Headers:
	I0414 14:45:45.079109 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:45:45.079114 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:45:45.081517 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:45:45.578846 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:45:45.578871 1218338 round_trippers.go:476] Request Headers:
	I0414 14:45:45.578880 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:45:45.578884 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:45:45.581260 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:45:46.079043 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:45:46.079072 1218338 round_trippers.go:476] Request Headers:
	I0414 14:45:46.079081 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:45:46.079087 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:45:46.081668 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:45:46.081781 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:45:46.579558 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:45:46.579586 1218338 round_trippers.go:476] Request Headers:
	I0414 14:45:46.579595 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:45:46.579601 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:45:46.581993 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:45:47.078748 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:45:47.078772 1218338 round_trippers.go:476] Request Headers:
	I0414 14:45:47.078781 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:45:47.078784 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:45:47.081572 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:45:47.579206 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:45:47.579238 1218338 round_trippers.go:476] Request Headers:
	I0414 14:45:47.579274 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:45:47.579283 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:45:47.581855 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:45:48.079643 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:45:48.079669 1218338 round_trippers.go:476] Request Headers:
	I0414 14:45:48.079684 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:45:48.079688 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:45:48.082004 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:45:48.082098 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:45:48.578737 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:45:48.578772 1218338 round_trippers.go:476] Request Headers:
	I0414 14:45:48.578782 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:45:48.578786 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:45:48.581035 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:45:49.078729 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:45:49.078754 1218338 round_trippers.go:476] Request Headers:
	I0414 14:45:49.078762 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:45:49.078768 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:45:49.081037 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:45:49.578737 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:45:49.578762 1218338 round_trippers.go:476] Request Headers:
	I0414 14:45:49.578773 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:45:49.578777 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:45:49.581160 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:45:50.079247 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:45:50.079282 1218338 round_trippers.go:476] Request Headers:
	I0414 14:45:50.079291 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:45:50.079297 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:45:50.081683 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:45:50.579536 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:45:50.579567 1218338 round_trippers.go:476] Request Headers:
	I0414 14:45:50.579580 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:45:50.579587 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:45:50.582169 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:45:50.582259 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
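
What the loop itself is doing: node_ready polls GET /api/v1/nodes/ha-290859-m02 roughly every 500 ms (the timestamps alternate between ~.079 and ~.579 within each second), treats each 404 as "node not registered yet", and periodically surfaces the not-found error at node_ready.go:53. A minimal client-go sketch of this kind of readiness wait, assuming a standard clientset rather than minikube's internal helpers (illustrative only, not minikube's actual implementation):

	// Sketch: poll the apiserver every 500ms until the named node exists and
	// reports Ready, tolerating NotFound while the node is still registering.
	package nodewait

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	func WaitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if apierrors.IsNotFound(err) {
					// Node object not created yet: the 404s in the log above.
					fmt.Printf("error getting node %q: not found, retrying\n", name)
					return false, nil
				}
				if err != nil {
					return false, err // any other API error aborts the wait
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return true, nil
					}
				}
				return false, nil // node exists but is not Ready yet
			})
	}

The important detail is that NotFound returns (false, nil) rather than an error, which is what keeps the wait alive through a 404 storm like the one in this log; the loop only gives up when the timeout elapses or a different API error occurs.
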
	I0414 14:45:51.078872 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:45:51.078904 1218338 round_trippers.go:476] Request Headers:
	I0414 14:45:51.078937 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:45:51.078946 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:45:51.081619 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:45:51.579478 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:45:51.579510 1218338 round_trippers.go:476] Request Headers:
	I0414 14:45:51.579521 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:45:51.579526 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:45:51.582304 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:45:52.079016 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:45:52.079048 1218338 round_trippers.go:476] Request Headers:
	I0414 14:45:52.079062 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:45:52.079069 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:45:52.081640 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:45:52.579431 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:45:52.579454 1218338 round_trippers.go:476] Request Headers:
	I0414 14:45:52.579463 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:45:52.579468 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:45:52.582094 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:45:53.078829 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:45:53.078852 1218338 round_trippers.go:476] Request Headers:
	I0414 14:45:53.078861 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:45:53.078866 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:45:53.081014 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:45:53.081094 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:45:53.578731 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:45:53.578760 1218338 round_trippers.go:476] Request Headers:
	I0414 14:45:53.578772 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:45:53.578777 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:45:53.581451 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:45:54.079202 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:45:54.079283 1218338 round_trippers.go:476] Request Headers:
	I0414 14:45:54.079297 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:45:54.079303 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:45:54.081344 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:45:54.579085 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:45:54.579110 1218338 round_trippers.go:476] Request Headers:
	I0414 14:45:54.579120 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:45:54.579126 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:45:54.581562 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:45:55.079630 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:45:55.079658 1218338 round_trippers.go:476] Request Headers:
	I0414 14:45:55.079667 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:45:55.079672 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:45:55.082071 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:45:55.082177 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:45:55.578792 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:45:55.578821 1218338 round_trippers.go:476] Request Headers:
	I0414 14:45:55.578834 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:45:55.578841 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:45:55.581614 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:45:56.079439 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:45:56.079468 1218338 round_trippers.go:476] Request Headers:
	I0414 14:45:56.079480 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:45:56.079487 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:45:56.082079 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:45:56.578744 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:45:56.578768 1218338 round_trippers.go:476] Request Headers:
	I0414 14:45:56.578777 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:45:56.578783 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:45:56.581460 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:45:57.079159 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:45:57.079195 1218338 round_trippers.go:476] Request Headers:
	I0414 14:45:57.079203 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:45:57.079209 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:45:57.081628 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:45:57.579411 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:45:57.579435 1218338 round_trippers.go:476] Request Headers:
	I0414 14:45:57.579444 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:45:57.579449 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:45:57.581739 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:45:57.581831 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:45:58.079577 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:45:58.079601 1218338 round_trippers.go:476] Request Headers:
	I0414 14:45:58.079609 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:45:58.079613 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:45:58.081689 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:45:58.579419 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:45:58.579445 1218338 round_trippers.go:476] Request Headers:
	I0414 14:45:58.579460 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:45:58.579465 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:45:58.582220 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:45:59.078945 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:45:59.078971 1218338 round_trippers.go:476] Request Headers:
	I0414 14:45:59.078979 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:45:59.078985 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:45:59.081071 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:45:59.578772 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:45:59.578796 1218338 round_trippers.go:476] Request Headers:
	I0414 14:45:59.578804 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:45:59.578809 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:45:59.581379 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:46:00.079377 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:00.079409 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:00.079422 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:00.079430 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:00.081918 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:46:00.082009 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:46:00.579731 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:00.579760 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:00.579772 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:00.579781 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:00.582112 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:46:01.078818 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:01.078844 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:01.078853 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:01.078857 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:01.081380 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:46:01.579172 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:01.579199 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:01.579207 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:01.579212 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:01.581640 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:46:02.079421 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:02.079445 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:02.079462 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:02.079466 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:02.081987 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:46:02.082086 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:46:02.578816 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:02.578837 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:02.578845 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:02.578851 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:02.581179 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:46:03.078831 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:03.078856 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:03.078866 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:03.078870 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:03.081070 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:46:03.578760 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:03.578785 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:03.578793 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:03.578799 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:03.581079 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:46:04.078804 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:04.078829 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:04.078838 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:04.078843 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:04.081621 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:46:04.579458 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:04.579484 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:04.579496 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:04.579503 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:04.581667 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:46:04.581776 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:46:05.079610 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:05.079633 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:05.079642 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:05.079647 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:05.081848 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:46:05.579624 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:05.579654 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:05.579667 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:05.579672 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:05.582125 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:46:06.078809 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:06.078833 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:06.078842 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:06.078849 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:06.081663 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:46:06.579504 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:06.579530 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:06.579539 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:06.579544 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:06.582207 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:46:06.582299 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:46:07.078952 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:07.078979 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:07.078987 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:07.078991 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:07.081420 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:46:07.579095 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:07.579115 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:07.579123 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:07.579126 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:07.581100 1218338 round_trippers.go:581] Response Status: 404 Not Found in 1 milliseconds
	I0414 14:46:08.079446 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:08.079482 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:08.079497 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:08.079503 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:08.086034 1218338 round_trippers.go:581] Response Status: 404 Not Found in 6 milliseconds
	I0414 14:46:08.578766 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:08.578800 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:08.578815 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:08.578821 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:08.581247 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:46:09.079012 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:09.079038 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:09.079047 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:09.079050 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:09.082074 1218338 round_trippers.go:581] Response Status: 404 Not Found in 3 milliseconds
	I0414 14:46:09.082157 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:46:09.578764 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:09.578788 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:09.578796 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:09.578800 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:09.581007 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:46:10.078925 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:10.078948 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:10.078957 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:10.078960 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:10.081183 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:46:10.578908 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:10.578932 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:10.578940 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:10.578946 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:10.581261 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:46:11.078982 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:11.079009 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:11.079017 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:11.079025 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:11.081376 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:46:11.579275 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:11.579300 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:11.579308 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:11.579317 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:11.582132 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:46:11.582213 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:46:12.078870 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:12.078896 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:12.078909 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:12.078913 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:12.081301 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:46:12.579074 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:12.579095 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:12.579103 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:12.579108 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:12.581572 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:46:13.079362 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:13.079387 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:13.079396 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:13.079400 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:13.081758 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:46:13.579532 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:13.579556 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:13.579564 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:13.579569 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:13.581809 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:46:14.079592 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:14.079614 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:14.079622 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:14.079625 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:14.081939 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:46:14.082019 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:46:14.579745 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:14.579767 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:14.579776 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:14.579780 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:14.582177 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:46:15.079356 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:15.079380 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:15.079389 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:15.079394 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:15.082393 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:46:15.579683 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:15.579705 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:15.579716 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:15.579722 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:15.582111 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:46:16.078825 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:16.078849 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:16.078858 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:16.078862 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:16.081059 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:46:16.578866 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:16.578890 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:16.578899 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:16.578904 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:16.581620 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:46:16.581711 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:46:17.079443 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:17.079467 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:17.079475 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:17.079480 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:17.082068 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:46:17.578777 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:17.578795 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:17.578803 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:17.578809 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:17.581158 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:46:18.078872 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:18.078897 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:18.078906 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:18.078909 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:18.081106 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:46:18.578795 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:18.578820 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:18.578829 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:18.578835 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:18.581366 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:46:19.079074 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:19.079099 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:19.079108 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:19.079112 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:19.081485 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:46:19.081574 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:46:19.579178 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:19.579202 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:19.579213 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:19.579219 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:19.581468 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:46:20.079648 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:20.079682 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:20.079695 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:20.079703 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:20.082112 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:46:20.578811 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:20.578834 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:20.578843 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:20.578849 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:20.581210 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:46:21.078981 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:21.079005 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:21.079014 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:21.079018 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:21.081412 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:46:21.579118 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:21.579143 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:21.579152 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:21.579155 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:21.581853 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:46:21.581953 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:46:22.079640 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:22.079664 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:22.079673 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:22.079678 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:22.081929 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:46:22.578720 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:22.578741 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:22.578749 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:22.578753 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:22.581247 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:46:23.079006 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:23.079033 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:23.079042 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:23.079046 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:23.081568 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:46:23.579444 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:46:23.579469 1218338 round_trippers.go:476] Request Headers:
	I0414 14:46:23.579479 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:46:23.579483 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:46:23.581636 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	[... GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02 polled every ~500ms through 14:47:20, each request returning 404 Not Found in 1-3 milliseconds; the periodic node_ready checkpoints are kept below ...]
	I0414 14:46:24.082259 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:46:26.082594 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:46:28.582035 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:46:31.082711 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:46:33.082992 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:46:35.581195 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:46:37.581554 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:46:40.082250 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:46:42.581451 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:46:45.081743 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:46:47.081944 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:46:49.581570 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:46:51.581810 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:46:53.582120 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:46:56.081957 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:46:58.082182 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:47:00.581380 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:47:03.081590 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:47:05.081648 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:47:07.581365 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:47:09.581590 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:47:11.581796 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:47:14.081627 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:47:16.581409 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:47:18.581438 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:47:20.079723 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:47:20.079749 1218338 round_trippers.go:476] Request Headers:
	I0414 14:47:20.079760 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:47:20.079766 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:47:20.082083 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:47:20.578795 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:47:20.578820 1218338 round_trippers.go:476] Request Headers:
	I0414 14:47:20.578828 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:47:20.578834 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:47:20.581407 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:47:20.581516 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:47:21.078838 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:47:21.078863 1218338 round_trippers.go:476] Request Headers:
	I0414 14:47:21.078871 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:47:21.078877 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:47:21.081177 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:47:21.578996 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:47:21.579022 1218338 round_trippers.go:476] Request Headers:
	I0414 14:47:21.579032 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:47:21.579039 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:47:21.581640 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:47:22.079506 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:47:22.079533 1218338 round_trippers.go:476] Request Headers:
	I0414 14:47:22.079542 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:47:22.079547 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:47:22.082119 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:47:22.578876 1218338 node_ready.go:38] duration metric: took 4m0.000340348s for node "ha-290859-m02" to be "Ready" ...
	I0414 14:47:22.581156 1218338 out.go:201] 
	W0414 14:47:22.582560 1218338 out.go:270] X Exiting due to GUEST_NODE_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0414 14:47:22.582578 1218338 out.go:270] * 
	W0414 14:47:22.586748 1218338 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 14:47:22.588090 1218338 out.go:201] 

** /stderr **
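The failure above is the readiness poll in node_ready.go: it re-issues GET /api/v1/nodes/ha-290859-m02 roughly every 500ms, and every response is a 404 because the m02 node never re-registered with the API server before the 4-minute node wait (inside the outer 6-minute deadline) expired. A minimal sketch of the same check written against k8s.io/client-go follows; the helper name, package name, and interval are assumptions for illustration, not minikube's actual code.

    package readiness

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the API server until the named node reports the
    // Ready condition or the timeout expires. Illustrative sketch only.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            switch {
            case apierrors.IsNotFound(err):
                // Node object does not exist yet -- the 404s in the log above.
            case err != nil:
                return err
            default:
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil // node is Ready
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("node %q did not become Ready within %v", name, timeout)
    }
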
ha_test.go:424: I0414 14:43:04.623487 1218338 out.go:345] Setting OutFile to fd 1 ...
I0414 14:43:04.623603 1218338 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 14:43:04.623613 1218338 out.go:358] Setting ErrFile to fd 2...
I0414 14:43:04.623616 1218338 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 14:43:04.623871 1218338 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
I0414 14:43:04.624178 1218338 mustload.go:65] Loading cluster: ha-290859
I0414 14:43:04.624584 1218338 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0414 14:43:04.625001 1218338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0414 14:43:04.625079 1218338 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 14:43:04.642752 1218338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32823
I0414 14:43:04.643353 1218338 main.go:141] libmachine: () Calling .GetVersion
I0414 14:43:04.643895 1218338 main.go:141] libmachine: Using API Version  1
I0414 14:43:04.643924 1218338 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 14:43:04.644325 1218338 main.go:141] libmachine: () Calling .GetMachineName
I0414 14:43:04.644572 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetState
W0414 14:43:04.646354 1218338 host.go:58] "ha-290859-m02" host status: Stopped
I0414 14:43:04.648488 1218338 out.go:177] * Starting "ha-290859-m02" control-plane node in "ha-290859" cluster
I0414 14:43:04.649697 1218338 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
I0414 14:43:04.649750 1218338 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
I0414 14:43:04.649772 1218338 cache.go:56] Caching tarball of preloaded images
I0414 14:43:04.649898 1218338 preload.go:172] Found /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0414 14:43:04.649911 1218338 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
I0414 14:43:04.650074 1218338 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
I0414 14:43:04.650320 1218338 start.go:360] acquireMachinesLock for ha-290859-m02: {Name:mk496006d22a0565bb9e0d565e1b3cb0cf0971cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0414 14:43:04.650431 1218338 start.go:364] duration metric: took 37.1µs to acquireMachinesLock for "ha-290859-m02"
I0414 14:43:04.650454 1218338 start.go:96] Skipping create...Using existing machine configuration
I0414 14:43:04.650465 1218338 fix.go:54] fixHost starting: m02
I0414 14:43:04.650752 1218338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0414 14:43:04.650785 1218338 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 14:43:04.666821 1218338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44653
I0414 14:43:04.667391 1218338 main.go:141] libmachine: () Calling .GetVersion
I0414 14:43:04.667890 1218338 main.go:141] libmachine: Using API Version  1
I0414 14:43:04.667912 1218338 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 14:43:04.668346 1218338 main.go:141] libmachine: () Calling .GetMachineName
I0414 14:43:04.668591 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
I0414 14:43:04.668797 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetState
I0414 14:43:04.670410 1218338 fix.go:112] recreateIfNeeded on ha-290859-m02: state=Stopped err=<nil>
I0414 14:43:04.670462 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
W0414 14:43:04.670632 1218338 fix.go:138] unexpected machine state, will restart: <nil>
I0414 14:43:04.672592 1218338 out.go:177] * Restarting existing kvm2 VM for "ha-290859-m02" ...
I0414 14:43:04.674013 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .Start
I0414 14:43:04.674262 1218338 main.go:141] libmachine: (ha-290859-m02) starting domain...
I0414 14:43:04.674284 1218338 main.go:141] libmachine: (ha-290859-m02) ensuring networks are active...
I0414 14:43:04.675153 1218338 main.go:141] libmachine: (ha-290859-m02) Ensuring network default is active
I0414 14:43:04.675736 1218338 main.go:141] libmachine: (ha-290859-m02) Ensuring network mk-ha-290859 is active
I0414 14:43:04.676207 1218338 main.go:141] libmachine: (ha-290859-m02) getting domain XML...
I0414 14:43:04.677099 1218338 main.go:141] libmachine: (ha-290859-m02) creating domain...
I0414 14:43:05.925169 1218338 main.go:141] libmachine: (ha-290859-m02) waiting for IP...
I0414 14:43:05.926204 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:43:05.926702 1218338 main.go:141] libmachine: (ha-290859-m02) found domain IP: 192.168.39.111
I0414 14:43:05.926729 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has current primary IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:43:05.926739 1218338 main.go:141] libmachine: (ha-290859-m02) reserving static IP address...
I0414 14:43:05.927368 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "ha-290859-m02", mac: "52:54:00:f0:fd:94", ip: "192.168.39.111"} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
I0414 14:43:05.927402 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | skip adding static IP to network mk-ha-290859 - found existing host DHCP lease matching {name: "ha-290859-m02", mac: "52:54:00:f0:fd:94", ip: "192.168.39.111"}
I0414 14:43:05.927426 1218338 main.go:141] libmachine: (ha-290859-m02) reserved static IP address 192.168.39.111 for domain ha-290859-m02
I0414 14:43:05.927440 1218338 main.go:141] libmachine: (ha-290859-m02) waiting for SSH...
I0414 14:43:05.927456 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | Getting to WaitForSSH function...
I0414 14:43:05.929590 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:43:05.930000 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
I0414 14:43:05.930045 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:43:05.930217 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | Using SSH client type: external
I0414 14:43:05.930275 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa (-rw-------)
I0414 14:43:05.930316 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
I0414 14:43:05.930334 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | About to run SSH command:
I0414 14:43:05.930343 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | exit 0
I0414 14:43:17.063533 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | SSH cmd err, output: <nil>: 
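The eleven-second gap between "waiting for SSH" and this line is the driver retrying a no-op remote command (`exit 0`) until the guest's sshd accepts the connection, using the ssh options shown in the DBG line above. A sketch of that probe shape with os/exec (hypothetical helper, not minikube's actual WaitForSSH):

    package sshwait

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForSSH retries `ssh ... exit 0` until the guest accepts the
    // connection or the timeout expires. Option list mirrors the log above.
    func waitForSSH(addr, keyPath string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("ssh",
                "-o", "StrictHostKeyChecking=no",
                "-o", "UserKnownHostsFile=/dev/null",
                "-o", "ConnectTimeout=10",
                "-i", keyPath,
                "docker@"+addr, "exit 0")
            if err := cmd.Run(); err == nil {
                return nil // sshd is reachable
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("ssh to %s not ready within %v", addr, timeout)
    }
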
I0414 14:43:17.063857 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
I0414 14:43:17.064504 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
I0414 14:43:17.067442 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:43:17.068023 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:43:15 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
I0414 14:43:17.068051 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:43:17.068279 1218338 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
I0414 14:43:17.068470 1218338 machine.go:93] provisionDockerMachine start ...
I0414 14:43:17.068490 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
I0414 14:43:17.068735 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
I0414 14:43:17.070975 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:43:17.071289 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:43:15 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
I0414 14:43:17.071315 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:43:17.071465 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
I0414 14:43:17.071637 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
I0414 14:43:17.071772 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
I0414 14:43:17.071919 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
I0414 14:43:17.072074 1218338 main.go:141] libmachine: Using SSH client type: native
I0414 14:43:17.072312 1218338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
I0414 14:43:17.072326 1218338 main.go:141] libmachine: About to run SSH command:
hostname
I0414 14:43:17.175355 1218338 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube

I0414 14:43:17.175385 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
I0414 14:43:17.175656 1218338 buildroot.go:166] provisioning hostname "ha-290859-m02"
I0414 14:43:17.175684 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
I0414 14:43:17.175873 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
I0414 14:43:17.178615 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:43:17.179092 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:43:15 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
I0414 14:43:17.179117 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:43:17.179351 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
I0414 14:43:17.179575 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
I0414 14:43:17.179793 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
I0414 14:43:17.179987 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
I0414 14:43:17.180198 1218338 main.go:141] libmachine: Using SSH client type: native
I0414 14:43:17.180412 1218338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
I0414 14:43:17.180424 1218338 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-290859-m02 && echo "ha-290859-m02" | sudo tee /etc/hostname
I0414 14:43:17.297488 1218338 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-290859-m02

I0414 14:43:17.297519 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
I0414 14:43:17.300166 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:43:17.300519 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:43:15 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
I0414 14:43:17.300541 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:43:17.300762 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
I0414 14:43:17.300963 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
I0414 14:43:17.301163 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
I0414 14:43:17.301337 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
I0414 14:43:17.301607 1218338 main.go:141] libmachine: Using SSH client type: native
I0414 14:43:17.301831 1218338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
I0414 14:43:17.301850 1218338 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\sha-290859-m02' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-290859-m02/g' /etc/hosts;
			else 
				echo '127.0.1.1 ha-290859-m02' | sudo tee -a /etc/hosts; 
			fi
		fi
I0414 14:43:17.411863 1218338 main.go:141] libmachine: SSH cmd err, output: <nil>: 
I0414 14:43:17.411929 1218338 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1196368/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1196368/.minikube}
I0414 14:43:17.411985 1218338 buildroot.go:174] setting up certificates
I0414 14:43:17.411999 1218338 provision.go:84] configureAuth start
I0414 14:43:17.412011 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
I0414 14:43:17.412336 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
I0414 14:43:17.414927 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:43:17.415376 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:43:15 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
I0414 14:43:17.415421 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:43:17.415590 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
I0414 14:43:17.417818 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:43:17.418252 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:43:15 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
I0414 14:43:17.418278 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:43:17.418429 1218338 provision.go:143] copyHostCerts
I0414 14:43:17.418468 1218338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
I0414 14:43:17.418500 1218338 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem, removing ...
I0414 14:43:17.418509 1218338 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
I0414 14:43:17.418572 1218338 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem (1082 bytes)
I0414 14:43:17.418663 1218338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
I0414 14:43:17.418680 1218338 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem, removing ...
I0414 14:43:17.418687 1218338 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
I0414 14:43:17.418710 1218338 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem (1123 bytes)
I0414 14:43:17.418825 1218338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
I0414 14:43:17.418846 1218338 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem, removing ...
I0414 14:43:17.418850 1218338 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
I0414 14:43:17.418875 1218338 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem (1675 bytes)
I0414 14:43:17.418943 1218338 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem org=jenkins.ha-290859-m02 san=[127.0.0.1 192.168.39.111 ha-290859-m02 localhost minikube]
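The "generating server cert" step issues a machine server certificate signed by the local CA, with the org and SAN list shown in the log line above (127.0.0.1, 192.168.39.111, ha-290859-m02, localhost, minikube). A minimal sketch of issuing such a certificate with crypto/x509; field choices such as key size and validity window are assumptions, and this is not provision.go's actual implementation:

    package certs

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    // issueServerCert signs a server certificate for the SANs from the log
    // above using an existing CA cert/key pair. Sketch only.
    func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) error {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-290859-m02"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().AddDate(10, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-290859-m02", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.111")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        if err != nil {
            return err
        }
        return pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
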
I0414 14:43:18.254475 1218338 provision.go:177] copyRemoteCerts
I0414 14:43:18.254562 1218338 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0414 14:43:18.254594 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
I0414 14:43:18.257284 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:43:18.257640 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:43:15 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
I0414 14:43:18.257664 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:43:18.257814 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
I0414 14:43:18.258009 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
I0414 14:43:18.258186 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
I0414 14:43:18.258340 1218338 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
I0414 14:43:18.341300 1218338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0414 14:43:18.341387 1218338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0414 14:43:18.364783 1218338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0414 14:43:18.364868 1218338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0414 14:43:18.387520 1218338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem -> /etc/docker/server.pem
I0414 14:43:18.387587 1218338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0414 14:43:18.410332 1218338 provision.go:87] duration metric: took 998.314023ms to configureAuth
I0414 14:43:18.410371 1218338 buildroot.go:189] setting minikube options for container-runtime
I0414 14:43:18.410644 1218338 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0414 14:43:18.410664 1218338 machine.go:96] duration metric: took 1.342180249s to provisionDockerMachine
I0414 14:43:18.410676 1218338 start.go:293] postStartSetup for "ha-290859-m02" (driver="kvm2")
I0414 14:43:18.410693 1218338 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0414 14:43:18.410730 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
I0414 14:43:18.411049 1218338 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0414 14:43:18.411080 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
I0414 14:43:18.414369 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:43:18.414811 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:43:15 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
I0414 14:43:18.414857 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:43:18.415064 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
I0414 14:43:18.415266 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
I0414 14:43:18.415451 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
I0414 14:43:18.415611 1218338 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
I0414 14:43:18.498703 1218338 ssh_runner.go:195] Run: cat /etc/os-release
I0414 14:43:18.502691 1218338 info.go:137] Remote host: Buildroot 2023.02.9
I0414 14:43:18.502719 1218338 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/addons for local assets ...
I0414 14:43:18.502797 1218338 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/files for local assets ...
I0414 14:43:18.502870 1218338 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> 12036392.pem in /etc/ssl/certs
I0414 14:43:18.502879 1218338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /etc/ssl/certs/12036392.pem
I0414 14:43:18.503019 1218338 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0414 14:43:18.512126 1218338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /etc/ssl/certs/12036392.pem (1708 bytes)
I0414 14:43:18.533276 1218338 start.go:296] duration metric: took 122.582762ms for postStartSetup
I0414 14:43:18.533322 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
I0414 14:43:18.533632 1218338 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
I0414 14:43:18.533664 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
I0414 14:43:18.536486 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:43:18.536870 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:43:15 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
I0414 14:43:18.536899 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:43:18.537075 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
I0414 14:43:18.537285 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
I0414 14:43:18.537421 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
I0414 14:43:18.537564 1218338 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
I0414 14:43:18.617778 1218338 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
I0414 14:43:18.617873 1218338 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
I0414 14:43:18.653377 1218338 fix.go:56] duration metric: took 14.002904046s for fixHost
I0414 14:43:18.653438 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
I0414 14:43:18.656627 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:43:18.657075 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:43:15 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
I0414 14:43:18.657119 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:43:18.657291 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
I0414 14:43:18.657507 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
I0414 14:43:18.657705 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
I0414 14:43:18.657889 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
I0414 14:43:18.658139 1218338 main.go:141] libmachine: Using SSH client type: native
I0414 14:43:18.658368 1218338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
I0414 14:43:18.658380 1218338 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0414 14:43:18.764016 1218338 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744641798.723290276

I0414 14:43:18.764040 1218338 fix.go:216] guest clock: 1744641798.723290276
I0414 14:43:18.764047 1218338 fix.go:229] Guest: 2025-04-14 14:43:18.723290276 +0000 UTC Remote: 2025-04-14 14:43:18.653405465 +0000 UTC m=+14.069454943 (delta=69.884811ms)
I0414 14:43:18.764080 1218338 fix.go:200] guest clock delta is within tolerance: 69.884811ms
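The clock check above runs `date +%s.%N` in the guest and compares the result against the host clock; here the 69.884811ms delta is inside the allowed skew, so no resync is needed. The comparison reduces to the following sketch (hypothetical helper; the tolerance value itself is an assumption):

    package clock

    import "time"

    // clockDeltaOK reports the absolute guest/host clock skew and whether
    // it falls inside the tolerance. Sketch only.
    func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
        delta := host.Sub(guest)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance
    }
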
I0414 14:43:18.764086 1218338 start.go:83] releasing machines lock for "ha-290859-m02", held for 14.11364342s
I0414 14:43:18.764107 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
I0414 14:43:18.764505 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
I0414 14:43:18.767315 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:43:18.767787 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:43:15 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
I0414 14:43:18.767817 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:43:18.768000 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
I0414 14:43:18.768497 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
I0414 14:43:18.768687 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
I0414 14:43:18.768790 1218338 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0414 14:43:18.768841 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
I0414 14:43:18.768966 1218338 ssh_runner.go:195] Run: systemctl --version
I0414 14:43:18.768999 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
I0414 14:43:18.772089 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:43:18.772388 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:43:18.772576 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:43:15 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
I0414 14:43:18.772604 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:43:18.772738 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
I0414 14:43:18.772879 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:43:15 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
I0414 14:43:18.772916 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
I0414 14:43:18.772940 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:43:18.773152 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
I0414 14:43:18.773160 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
I0414 14:43:18.773367 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
I0414 14:43:18.773402 1218338 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
I0414 14:43:18.773529 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
I0414 14:43:18.773640 1218338 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
I0414 14:43:18.853637 1218338 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0414 14:43:18.880220 1218338 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0414 14:43:18.880302 1218338 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0414 14:43:18.896482 1218338 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0414 14:43:18.896518 1218338 start.go:495] detecting cgroup driver to use...
I0414 14:43:18.896611 1218338 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0414 14:43:18.924234 1218338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0414 14:43:18.936976 1218338 docker.go:217] disabling cri-docker service (if available) ...
I0414 14:43:18.937027 1218338 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0414 14:43:18.950926 1218338 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0414 14:43:18.963540 1218338 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0414 14:43:19.075727 1218338 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0414 14:43:19.219176 1218338 docker.go:233] disabling docker service ...
I0414 14:43:19.219282 1218338 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0414 14:43:19.233471 1218338 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0414 14:43:19.247350 1218338 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0414 14:43:19.383935 1218338 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0414 14:43:19.519329 1218338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0414 14:43:19.537595 1218338 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0414 14:43:19.555241 1218338 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0414 14:43:19.564683 1218338 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0414 14:43:19.574882 1218338 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0414 14:43:19.574983 1218338 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0414 14:43:19.584168 1218338 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0414 14:43:19.593483 1218338 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0414 14:43:19.602495 1218338 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0414 14:43:19.611477 1218338 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0414 14:43:19.620843 1218338 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0414 14:43:19.630692 1218338 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0414 14:43:19.640368 1218338 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0414 14:43:19.650247 1218338 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0414 14:43:19.659060 1218338 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:

stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0414 14:43:19.659112 1218338 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0414 14:43:19.670671 1218338 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0414 14:43:19.679947 1218338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0414 14:43:19.789643 1218338 ssh_runner.go:195] Run: sudo systemctl restart containerd
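Taken together, the sed edits above rewrite /etc/containerd/config.toml before this restart. Reconstructed from the sed expressions alone (not the full file; the nesting of the surrounding tables is an assumption), the touched keys end up roughly as:

    [plugins."io.containerd.grpc.v1.cri"]
      enable_unprivileged_ports = true
      sandbox_image = "registry.k8s.io/pause:3.10"
      restrict_oom_score_adj = false
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = false

With SystemdCgroup = false, containerd drives cgroups directly, matching the "configuring containerd to use \"cgroupfs\" as cgroup driver" line above.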
I0414 14:43:19.817223 1218338 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0414 14:43:19.817343 1218338 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0414 14:43:19.821964 1218338 retry.go:31] will retry after 1.140799605s: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
I0414 14:43:20.963366 1218338 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0414 14:43:20.968958 1218338 start.go:563] Will wait 60s for crictl version
I0414 14:43:20.969028 1218338 ssh_runner.go:195] Run: which crictl
I0414 14:43:20.973211 1218338 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0414 14:43:21.014989 1218338 start.go:579] Version:  0.1.0
RuntimeName:  containerd
RuntimeVersion:  v1.7.23
RuntimeApiVersion:  v1
I0414 14:43:21.015080 1218338 ssh_runner.go:195] Run: containerd --version
I0414 14:43:21.040994 1218338 ssh_runner.go:195] Run: containerd --version
I0414 14:43:21.066310 1218338 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.23 ...
I0414 14:43:21.067828 1218338 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
I0414 14:43:21.070588 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:43:21.071009 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:43:15 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
I0414 14:43:21.071042 1218338 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
I0414 14:43:21.071289 1218338 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
I0414 14:43:21.075369 1218338 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0414 14:43:21.087007 1218338 mustload.go:65] Loading cluster: ha-290859
I0414 14:43:21.087242 1218338 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0414 14:43:21.087528 1218338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0414 14:43:21.087570 1218338 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 14:43:21.103296 1218338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33099
I0414 14:43:21.103781 1218338 main.go:141] libmachine: () Calling .GetVersion
I0414 14:43:21.104282 1218338 main.go:141] libmachine: Using API Version  1
I0414 14:43:21.104310 1218338 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 14:43:21.104652 1218338 main.go:141] libmachine: () Calling .GetMachineName
I0414 14:43:21.104831 1218338 main.go:141] libmachine: (ha-290859) Calling .GetState
I0414 14:43:21.106301 1218338 host.go:66] Checking if "ha-290859" exists ...
I0414 14:43:21.106620 1218338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0414 14:43:21.106658 1218338 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 14:43:21.121923 1218338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41999
I0414 14:43:21.122360 1218338 main.go:141] libmachine: () Calling .GetVersion
I0414 14:43:21.122757 1218338 main.go:141] libmachine: Using API Version  1
I0414 14:43:21.122776 1218338 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 14:43:21.123145 1218338 main.go:141] libmachine: () Calling .GetMachineName
I0414 14:43:21.123377 1218338 main.go:141] libmachine: (ha-290859) Calling .DriverName
I0414 14:43:21.123560 1218338 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859 for IP: 192.168.39.111
I0414 14:43:21.123574 1218338 certs.go:194] generating shared ca certs ...
I0414 14:43:21.123591 1218338 certs.go:226] acquiring lock for ca certs: {Name:mk7215406b4c41badf9eca6bf9f1036fd88f670e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 14:43:21.123760 1218338 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key
I0414 14:43:21.123826 1218338 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key
I0414 14:43:21.123843 1218338 certs.go:256] generating profile certs ...
I0414 14:43:21.123948 1218338 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key
I0414 14:43:21.124101 1218338 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e
I0414 14:43:21.124205 1218338 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key
I0414 14:43:21.124221 1218338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0414 14:43:21.124242 1218338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0414 14:43:21.124260 1218338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0414 14:43:21.124277 1218338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0414 14:43:21.124292 1218338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0414 14:43:21.124317 1218338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0414 14:43:21.124336 1218338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0414 14:43:21.124363 1218338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0414 14:43:21.124445 1218338 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem (1338 bytes)
W0414 14:43:21.124489 1218338 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639_empty.pem, impossibly tiny 0 bytes
I0414 14:43:21.124504 1218338 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem (1679 bytes)
I0414 14:43:21.124538 1218338 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem (1082 bytes)
I0414 14:43:21.124565 1218338 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem (1123 bytes)
I0414 14:43:21.124604 1218338 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem (1675 bytes)
I0414 14:43:21.124656 1218338 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem (1708 bytes)
I0414 14:43:21.124697 1218338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem -> /usr/share/ca-certificates/1203639.pem
I0414 14:43:21.124717 1218338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /usr/share/ca-certificates/12036392.pem
I0414 14:43:21.124733 1218338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0414 14:43:21.124765 1218338 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
I0414 14:43:21.127844 1218338 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:43:21.128444 1218338 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
I0414 14:43:21.128467 1218338 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
I0414 14:43:21.128570 1218338 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
I0414 14:43:21.128727 1218338 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
I0414 14:43:21.128927 1218338 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
I0414 14:43:21.129066 1218338 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
I0414 14:43:21.203640 1218338 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
I0414 14:43:21.208444 1218338 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
I0414 14:43:21.218431 1218338 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
I0414 14:43:21.222260 1218338 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
I0414 14:43:21.232210 1218338 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
I0414 14:43:21.238516 1218338 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
I0414 14:43:21.251778 1218338 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
I0414 14:43:21.256194 1218338 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
I0414 14:43:21.266052 1218338 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
I0414 14:43:21.269738 1218338 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
I0414 14:43:21.280173 1218338 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
I0414 14:43:21.284189 1218338 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
I0414 14:43:21.294685 1218338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0414 14:43:21.318963 1218338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0414 14:43:21.341043 1218338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0414 14:43:21.362799 1218338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0414 14:43:21.385596 1218338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
I0414 14:43:21.410567 1218338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0414 14:43:21.436704 1218338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0414 14:43:21.460821 1218338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0414 14:43:21.482923 1218338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem --> /usr/share/ca-certificates/1203639.pem (1338 bytes)
I0414 14:43:21.504682 1218338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /usr/share/ca-certificates/12036392.pem (1708 bytes)
I0414 14:43:21.527747 1218338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0414 14:43:21.550240 1218338 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
I0414 14:43:21.565923 1218338 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
I0414 14:43:21.581596 1218338 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
I0414 14:43:21.597754 1218338 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
I0414 14:43:21.612919 1218338 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
I0414 14:43:21.630203 1218338 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
I0414 14:43:21.646262 1218338 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
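The stat/"scp --> memory" pairs above followed by the "scp memory -->" writes show the shared control-plane secrets (sa.key, front-proxy CA, etcd CA) being read off the primary node and replayed onto the joining node, since they exist only where the cluster was first initialized. A minimal sketch of that fetch-then-push pattern, assuming golang.org/x/crypto/ssh; the helper names are hypothetical, not minikube's ssh_runner API:

```go
package sshsync

import (
	"bytes"
	"fmt"

	"golang.org/x/crypto/ssh"
)

// readRemote slurps a remote file into memory, mirroring the
// "stat -c %s ... / scp ... --> memory" pairs in the log above.
func readRemote(c *ssh.Client, path string) ([]byte, error) {
	sess, err := c.NewSession()
	if err != nil {
		return nil, err
	}
	defer sess.Close()
	return sess.Output(fmt.Sprintf("sudo cat %q", path))
}

// writeRemote pushes in-memory bytes to a remote path
// ("scp memory --> /var/lib/minikube/certs/...").
func writeRemote(c *ssh.Client, path string, data []byte) error {
	sess, err := c.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	return sess.Run(fmt.Sprintf("sudo tee %q >/dev/null", path))
}
```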
I0414 14:43:21.662432 1218338 ssh_runner.go:195] Run: openssl version
I0414 14:43:21.667747 1218338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12036392.pem && ln -fs /usr/share/ca-certificates/12036392.pem /etc/ssl/certs/12036392.pem"
I0414 14:43:21.678227 1218338 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12036392.pem
I0414 14:43:21.682473 1218338 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/12036392.pem
I0414 14:43:21.682522 1218338 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12036392.pem
I0414 14:43:21.687984 1218338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12036392.pem /etc/ssl/certs/3ec20f2e.0"
I0414 14:43:21.698274 1218338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0414 14:43:21.708457 1218338 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0414 14:43:21.712582 1218338 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
I0414 14:43:21.712643 1218338 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0414 14:43:21.717764 1218338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0414 14:43:21.727364 1218338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1203639.pem && ln -fs /usr/share/ca-certificates/1203639.pem /etc/ssl/certs/1203639.pem"
I0414 14:43:21.736954 1218338 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1203639.pem
I0414 14:43:21.740819 1218338 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1203639.pem
I0414 14:43:21.740875 1218338 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1203639.pem
I0414 14:43:21.746101 1218338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1203639.pem /etc/ssl/certs/51391683.0"
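The `openssl x509 -hash` / `ln -fs ... /etc/ssl/certs/<hash>.0` pairs above implement the standard c_rehash convention: OpenSSL looks CAs up by subject hash, so each certificate gets a `<hash>.0` symlink in /etc/ssl/certs. A minimal sketch of the same dance, shelling out to the openssl CLI as the log does (error handling trimmed):

```go
package certs

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCA computes the OpenSSL subject hash of certPath and points
// /etc/ssl/certs/<hash>.0 at it, matching the test/ln commands above.
func linkCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(certPath, link)
}
```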
I0414 14:43:21.756453 1218338 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0414 14:43:21.760314 1218338 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0414 14:43:21.760364 1218338 kubeadm.go:934] updating node {m02 192.168.39.111 8443 v1.32.2 containerd true true} ...
I0414 14:43:21.760467 1218338 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-290859-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.111

[Install]
config:
{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
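The ExecStart line above is rendered per node: --hostname-override and --node-ip are the only pieces that differ between ha-290859 and ha-290859-m02. A hypothetical rendering with text/template; the flag set is copied from the log, but the template itself is an assumption, not minikube's source:

```go
package main

import (
	"os"
	"text/template"
)

// execStart reproduces the kubelet drop-in line from the log with the
// node-specific fields parameterized.
const execStart = `ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet ` +
	`--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf ` +
	`--config=/var/lib/kubelet/config.yaml ` +
	`--hostname-override={{.NodeName}} ` +
	`--kubeconfig=/etc/kubernetes/kubelet.conf ` +
	`--node-ip={{.NodeIP}}`

func main() {
	t := template.Must(template.New("kubelet").Parse(execStart))
	_ = t.Execute(os.Stdout, map[string]string{
		"Version":  "v1.32.2",
		"NodeName": "ha-290859-m02",
		"NodeIP":   "192.168.39.111",
	})
}
```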
I0414 14:43:21.760491 1218338 kube-vip.go:115] generating kube-vip config ...
I0414 14:43:21.760524 1218338 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
I0414 14:43:21.776851 1218338 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
I0414 14:43:21.776993 1218338 kube-vip.go:137] kube-vip config:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args:
    - manager
    env:
    - name: vip_arp
      value: "true"
    - name: port
      value: "8443"
    - name: vip_nodename
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    - name: vip_interface
      value: eth0
    - name: vip_cidr
      value: "32"
    - name: dns_mode
      value: first
    - name: cp_enable
      value: "true"
    - name: cp_namespace
      value: kube-system
    - name: vip_leaderelection
      value: "true"
    - name: vip_leasename
      value: plndr-cp-lock
    - name: vip_leaseduration
      value: "5"
    - name: vip_renewdeadline
      value: "3"
    - name: vip_retryperiod
      value: "1"
    - name: address
      value: 192.168.39.254
    - name: prometheus_server
      value: :2112
    - name : lb_enable
      value: "true"
    - name: lb_port
      value: "8443"
    image: ghcr.io/kube-vip/kube-vip:v0.8.10
    imagePullPolicy: IfNotPresent
    name: kube-vip
    resources: {}
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
        - NET_RAW
    volumeMounts:
    - mountPath: /etc/kubernetes/admin.conf
      name: kubeconfig
  hostAliases:
  - hostnames:
    - kubernetes
    ip: 127.0.0.1
  hostNetwork: true
  volumes:
  - hostPath:
      path: "/etc/kubernetes/admin.conf"
    name: kubeconfig
status: {}
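The lease settings in the manifest (vip_leaseduration "5", vip_renewdeadline "3", vip_retryperiod "1", lease name plndr-cp-lock) drive kube-vip's Lease-based leader election for the VIP 192.168.39.254, with lb_enable adding control-plane load balancing on port 8443. One cheap sanity check on a generated manifest like this is to round-trip it through corev1.Pod; a minimal sketch, assuming k8s.io/api and sigs.k8s.io/yaml are on the module path:

```go
package main

import (
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Read the manifest that was written to the static-pod directory.
	raw, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
	if err != nil {
		panic(err)
	}
	// If this unmarshals cleanly, the generated YAML is at least well-formed.
	var pod corev1.Pod
	if err := yaml.Unmarshal(raw, &pod); err != nil {
		panic(err)
	}
	fmt.Println(pod.Name, pod.Spec.Containers[0].Image) // kube-vip ghcr.io/kube-vip/kube-vip:v0.8.10
}
```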
I0414 14:43:21.777061 1218338 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
I0414 14:43:21.787380 1218338 binaries.go:47] Didn't find k8s binaries: didn't find preexisting kubeadm
Initiating transfer...
I0414 14:43:21.787430 1218338 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.2
I0414 14:43:21.796684 1218338 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
I0414 14:43:21.796708 1218338 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet.sha256
I0414 14:43:21.796747 1218338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0414 14:43:21.796712 1218338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubectl -> /var/lib/minikube/binaries/v1.32.2/kubectl
I0414 14:43:21.796686 1218338 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256
I0414 14:43:21.796859 1218338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubeadm -> /var/lib/minikube/binaries/v1.32.2/kubeadm
I0414 14:43:21.796905 1218338 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl
I0414 14:43:21.796964 1218338 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm
I0414 14:43:21.800792 1218338 ssh_runner.go:356] copy: skipping /var/lib/minikube/binaries/v1.32.2/kubectl (exists)
I0414 14:43:21.813606 1218338 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubelet -> /var/lib/minikube/binaries/v1.32.2/kubelet
I0414 14:43:21.813684 1218338 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubeadm': No such file or directory
I0414 14:43:21.813696 1218338 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet
I0414 14:43:21.813725 1218338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubeadm --> /var/lib/minikube/binaries/v1.32.2/kubeadm (70942872 bytes)
I0414 14:43:21.829252 1218338 ssh_runner.go:356] copy: skipping /var/lib/minikube/binaries/v1.32.2/kubelet (exists)
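The "Not caching binary, using https://dl.k8s.io/...?checksum=file:...sha256" lines above mean the kubeadm download is verified against the published SHA-256 digest; the `?checksum=file:` query syntax is go-getter's, which minikube delegates to. A standalone sketch of the same verification using only the standard library:

```go
package download

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetchVerified downloads url and checks it against the sibling .sha256
// file, e.g. dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm{,.sha256}.
func fetchVerified(url string) ([]byte, error) {
	body, err := get(url)
	if err != nil {
		return nil, err
	}
	sum, err := get(url + ".sha256")
	if err != nil {
		return nil, err
	}
	want := strings.Fields(string(sum))[0] // digest is the first token
	got := sha256.Sum256(body)
	if hex.EncodeToString(got[:]) != want {
		return nil, fmt.Errorf("checksum mismatch for %s", url)
	}
	return body, nil
}

func get(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}
```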
I0414 14:43:22.206389 1218338 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
I0414 14:43:22.215841 1218338 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
I0414 14:43:22.231674 1218338 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0414 14:43:22.247180 1218338 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1442 bytes)
I0414 14:43:22.262773 1218338 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
I0414 14:43:22.266269 1218338 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
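The bash one-liner above makes the hosts entry idempotent: strip any existing control-plane.minikube.internal line, append the VIP mapping, and swap the file in via a temp copy. An equivalent Go sketch (it renames instead of the log's `cp`, which is interchangeable on a plain VM filesystem):

```go
package hosts

import (
	"os"
	"strings"
)

// pinControlPlane rewrites /etc/hosts so control-plane.minikube.internal
// resolves to the HA VIP (192.168.39.254 in this run), exactly once.
func pinControlPlane(vip string) error {
	const host = "control-plane.minikube.internal"
	raw, err := os.ReadFile("/etc/hosts")
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(raw), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) { // drop any stale entry
			kept = append(kept, line)
		}
	}
	kept = append(kept, vip+"\t"+host)
	tmp := "/etc/hosts.tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, "/etc/hosts") // atomic swap, mirroring the /tmp/h.$$ + cp dance
}
```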
I0414 14:43:22.277201 1218338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0414 14:43:22.383541 1218338 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0414 14:43:22.400665 1218338 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0414 14:43:22.400805 1218338 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0414 14:43:22.400994 1218338 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0414 14:43:22.402588 1218338 out.go:177] * Verifying Kubernetes components...
I0414 14:43:22.402588 1218338 out.go:177] * Enabled addons: 
I0414 14:43:22.404228 1218338 addons.go:514] duration metric: took 3.436827ms for enable addons: enabled=[]
I0414 14:43:22.404273 1218338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0414 14:43:22.560791 1218338 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0414 14:43:22.577076 1218338 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/20512-1196368/kubeconfig
I0414 14:43:22.577330 1218338 kapi.go:59] client config for ha-290859: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt", KeyFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key", CAFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
W0414 14:43:22.577452 1218338 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.110:8443
I0414 14:43:22.577996 1218338 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0414 14:43:22.578028 1218338 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0414 14:43:22.578036 1218338 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0414 14:43:22.578043 1218338 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0414 14:43:22.578053 1218338 cert_rotation.go:140] Starting client certificate rotation controller
I0414 14:43:22.578501 1218338 node_ready.go:35] waiting up to 6m0s for node "ha-290859-m02" to be "Ready" ...
I0414 14:43:22.578621 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:22.578633 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:22.578644 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:22.578650 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:22.587187 1218338 round_trippers.go:581] Response Status: 404 Not Found in 8 milliseconds
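The GET /api/v1/nodes/ha-290859-m02 → 404 cycle above repeats roughly every 500ms for the rest of this section: the node object does not exist until the m02 kubelet registers, so NotFound is tolerated and the loop is bounded by the 6m0s budget logged earlier. A sketch of that readiness wait, assuming client-go and apimachinery (minikube's actual loop lives in node_ready.go):

```go
package nodewait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls until the named node exists and reports Ready,
// treating NotFound as "not registered yet" rather than a failure.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return false, nil // kubelet has not registered the node yet
			}
			if err != nil {
				return false, err
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
```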
I0414 14:43:23.079083 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:23.079111 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:23.079132 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:23.079141 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:23.081572 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:23.579387 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:23.579416 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:23.579427 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:23.579432 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:23.581494 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:24.079409 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:24.079462 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:24.079474 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:24.079480 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:24.082289 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:24.579098 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:24.579122 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:24.579132 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:24.579138 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:24.581936 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:24.582034 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:43:25.079105 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:25.079138 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:25.079151 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:25.079169 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:25.082256 1218338 round_trippers.go:581] Response Status: 404 Not Found in 3 milliseconds
I0414 14:43:25.579031 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:25.579061 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:25.579073 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:25.579081 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:25.581956 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:26.079804 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:26.079845 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:26.079857 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:26.079866 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:26.082153 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:26.578894 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:26.578918 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:26.578926 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:26.578931 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:26.581316 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:27.079205 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:27.079234 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:27.079273 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:27.079281 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:27.081678 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:27.081787 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:43:27.579587 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:27.579626 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:27.579640 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:27.579650 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:27.581685 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:28.079514 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:28.079545 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:28.079558 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:28.079566 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:28.081933 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:28.579777 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:28.579826 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:28.579840 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:28.579849 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:28.582582 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:29.079390 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:29.079420 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:29.079432 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:29.079438 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:29.081944 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:29.082039 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:43:29.578754 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:29.578782 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:29.578791 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:29.578795 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:29.581380 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:30.079486 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:30.079513 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:30.079524 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:30.079530 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:30.081864 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:30.579588 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:30.579614 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:30.579623 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:30.579628 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:30.582354 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:31.079208 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:31.079239 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:31.079269 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:31.079277 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:31.081611 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:31.579118 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:31.579146 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:31.579157 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:31.579162 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:31.581776 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:31.581867 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:43:32.079542 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:32.079568 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:32.079577 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:32.079582 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:32.082093 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:32.578853 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:32.578878 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:32.578886 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:32.578892 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:32.581294 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:33.079033 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:33.079061 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:33.079069 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:33.079075 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:33.081525 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:33.579237 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:33.579291 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:33.579301 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:33.579308 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:33.581843 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:33.581970 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:43:34.079604 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:34.079690 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:34.079704 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:34.079711 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:34.082676 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:34.579435 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:34.579466 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:34.579478 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:34.579484 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:34.581810 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:35.078863 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:35.078888 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:35.078906 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:35.078913 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:35.081289 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:35.579156 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:35.579188 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:35.579200 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:35.579216 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:35.582113 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:35.582239 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:43:36.079017 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:36.079048 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:36.079058 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:36.079062 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:36.081788 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:36.579731 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:36.579757 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:36.579766 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:36.579770 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:36.582201 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:37.079041 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:37.079066 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:37.079075 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:37.079079 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:37.081587 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:37.579586 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:37.579612 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:37.579621 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:37.579626 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:37.581966 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:38.078791 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:38.078815 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:38.078846 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:38.078850 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:38.081841 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:38.081938 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:43:38.578700 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:38.578727 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:38.578741 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:38.578746 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:38.582013 1218338 round_trippers.go:581] Response Status: 404 Not Found in 3 milliseconds
I0414 14:43:39.078858 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:39.078884 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:39.078896 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:39.078902 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:39.081941 1218338 round_trippers.go:581] Response Status: 404 Not Found in 3 milliseconds
I0414 14:43:39.578824 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:39.578846 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:39.578861 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:39.578865 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:39.581477 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:40.079387 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:40.079412 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:40.079421 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:40.079425 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:40.082038 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:40.082199 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:43:40.578997 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:40.579029 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:40.579041 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:40.579050 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:40.581547 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:41.079465 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:41.079491 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:41.079500 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:41.079504 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:41.082129 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:41.578843 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:41.578871 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:41.578879 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:41.578884 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:41.581232 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:42.078964 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:42.078990 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:42.078998 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:42.079003 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:42.081375 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:42.579230 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:42.579267 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:42.579276 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:42.579281 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:42.581682 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:42.581773 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:43:43.079465 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:43.079493 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:43.079502 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:43.079506 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:43.081928 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:43.579747 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:43.579774 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:43.579784 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:43.579790 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:43.582368 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:44.079125 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:44.079154 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:44.079166 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:44.079174 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:44.081421 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:44.579238 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:44.579290 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:44.579302 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:44.579307 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:44.581981 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:44.582166 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:43:45.079075 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:45.079100 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:45.079108 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:45.079113 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:45.081931 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:45.579678 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:45.579703 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:45.579711 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:45.579716 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:45.582532 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:46.079284 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:46.079316 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:46.079325 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:46.079331 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:46.082156 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:46.579052 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:46.579076 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:46.579084 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:46.579089 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:46.581465 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:47.079163 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:47.079185 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:47.079194 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:47.079198 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:47.081719 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:47.081805 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:43:47.579612 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:47.579661 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:47.579674 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:47.579680 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:47.582384 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:48.079096 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:48.079120 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:48.079129 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:48.079134 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:48.081415 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:48.579138 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:48.579165 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:48.579174 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:48.579178 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:48.581799 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:49.079347 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:49.079371 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:49.079380 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:49.079386 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:49.081840 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:49.081952 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:43:49.579574 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:49.579604 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:49.579613 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:49.579618 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:49.582127 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:50.079065 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:50.079095 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:50.079107 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:50.079112 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:50.081263 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:50.578959 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:50.578985 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:50.578995 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:50.579000 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:50.581465 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:51.079133 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:51.079154 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:51.079172 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:51.079176 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:51.081639 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:51.579464 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:51.579490 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:51.579499 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:51.579503 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:51.582237 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:51.582346 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:43:52.078981 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:52.079011 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:52.079019 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:52.079024 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:52.081326 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:52.579242 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:52.579297 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:52.579311 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:52.579318 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:52.581841 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:53.079751 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:53.079779 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:53.079790 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:53.079796 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:53.082207 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:53.579028 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:53.579058 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:53.579068 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:53.579073 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:53.581502 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:54.079522 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:54.079550 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:54.079559 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:54.079564 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:54.082211 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:43:54.082299 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:43:54.579101 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:43:54.579130 1218338 round_trippers.go:476] Request Headers:
I0414 14:43:54.579141 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:43:54.579146 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:43:54.581963 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
[... the same GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02 poll repeats every ~500ms from 14:43:55.079 through 14:44:49.581 with identical Request Headers each time; every attempt returns "404 Not Found" in 1-4 milliseconds, and node_ready.go:53 logs `error getting node "ha-290859-m02": nodes "ha-290859-m02" not found` roughly every 2.5 seconds throughout ...]
I0414 14:44:49.581621 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:44:50.079395 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:44:50.079497 1218338 round_trippers.go:476] Request Headers:
I0414 14:44:50.079513 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:44:50.079519 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:44:50.084014 1218338 round_trippers.go:581] Response Status: 404 Not Found in 4 milliseconds
I0414 14:44:50.578740 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:44:50.578766 1218338 round_trippers.go:476] Request Headers:
I0414 14:44:50.578775 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:44:50.578780 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:44:50.581202 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:44:51.078916 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:44:51.078941 1218338 round_trippers.go:476] Request Headers:
I0414 14:44:51.078950 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:44:51.078955 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:44:51.081289 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:44:51.578975 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:44:51.579001 1218338 round_trippers.go:476] Request Headers:
I0414 14:44:51.579011 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:44:51.579015 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:44:51.581819 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:44:51.581914 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:44:52.079612 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:44:52.079643 1218338 round_trippers.go:476] Request Headers:
I0414 14:44:52.079655 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:44:52.079664 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:44:52.082050 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:44:52.578920 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:44:52.578950 1218338 round_trippers.go:476] Request Headers:
I0414 14:44:52.578959 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:44:52.578986 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:44:52.581282 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:44:53.079005 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:44:53.079031 1218338 round_trippers.go:476] Request Headers:
I0414 14:44:53.079039 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:44:53.079044 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:44:53.081224 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:44:53.578959 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:44:53.579011 1218338 round_trippers.go:476] Request Headers:
I0414 14:44:53.579022 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:44:53.579028 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:44:53.581154 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:44:54.078856 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:44:54.078882 1218338 round_trippers.go:476] Request Headers:
I0414 14:44:54.078892 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:44:54.078898 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:44:54.081640 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:44:54.081730 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:44:54.579462 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:44:54.579491 1218338 round_trippers.go:476] Request Headers:
I0414 14:44:54.579500 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:44:54.579504 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:44:54.582441 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:44:55.079596 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:44:55.079623 1218338 round_trippers.go:476] Request Headers:
I0414 14:44:55.079633 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:44:55.079642 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:44:55.081954 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:44:55.579740 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:44:55.579766 1218338 round_trippers.go:476] Request Headers:
I0414 14:44:55.579777 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:44:55.579784 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:44:55.581983 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:44:56.079728 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:44:56.079753 1218338 round_trippers.go:476] Request Headers:
I0414 14:44:56.079765 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:44:56.079771 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:44:56.082229 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:44:56.082333 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:44:56.578837 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:44:56.578864 1218338 round_trippers.go:476] Request Headers:
I0414 14:44:56.578876 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:44:56.578882 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:44:56.581550 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:44:57.079315 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:44:57.079346 1218338 round_trippers.go:476] Request Headers:
I0414 14:44:57.079356 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:44:57.079362 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:44:57.081705 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:44:57.579583 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:44:57.579614 1218338 round_trippers.go:476] Request Headers:
I0414 14:44:57.579623 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:44:57.579629 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:44:57.582669 1218338 round_trippers.go:581] Response Status: 404 Not Found in 3 milliseconds
I0414 14:44:58.079497 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:44:58.079524 1218338 round_trippers.go:476] Request Headers:
I0414 14:44:58.079533 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:44:58.079538 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:44:58.081727 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:44:58.579445 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:44:58.579470 1218338 round_trippers.go:476] Request Headers:
I0414 14:44:58.579483 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:44:58.579488 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:44:58.581977 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:44:58.582093 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:44:59.078717 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:44:59.078743 1218338 round_trippers.go:476] Request Headers:
I0414 14:44:59.078755 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:44:59.078760 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:44:59.081170 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:44:59.578849 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:44:59.578871 1218338 round_trippers.go:476] Request Headers:
I0414 14:44:59.578879 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:44:59.578884 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:44:59.581365 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:00.079386 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:00.079409 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:00.079419 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:00.079425 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:00.081674 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:00.579509 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:00.579612 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:00.579638 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:00.579646 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:00.582215 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:00.582370 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:45:01.078905 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:01.078931 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:01.078940 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:01.078944 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:01.081352 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:01.579387 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:01.579415 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:01.579425 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:01.579430 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:01.582052 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:02.078751 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:02.078778 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:02.078788 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:02.078793 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:02.080744 1218338 round_trippers.go:581] Response Status: 404 Not Found in 1 milliseconds
I0414 14:45:02.579599 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:02.579621 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:02.579629 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:02.579634 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:02.582039 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:03.078748 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:03.078773 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:03.078782 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:03.078785 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:03.081403 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:03.081517 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:45:03.579213 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:03.579241 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:03.579274 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:03.579283 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:03.581468 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:04.079062 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:04.079085 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:04.079093 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:04.079097 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:04.080979 1218338 round_trippers.go:581] Response Status: 404 Not Found in 1 milliseconds
I0414 14:45:04.578749 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:04.578774 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:04.578783 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:04.578787 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:04.581610 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:05.079520 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:05.079544 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:05.079554 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:05.079559 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:05.081822 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:05.081903 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:45:05.579620 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:05.579644 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:05.579652 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:05.579656 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:05.581960 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:06.079692 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:06.079714 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:06.079729 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:06.079734 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:06.081682 1218338 round_trippers.go:581] Response Status: 404 Not Found in 1 milliseconds
I0414 14:45:06.579412 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:06.579436 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:06.579444 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:06.579450 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:06.581704 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:07.079061 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:07.079085 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:07.079116 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:07.079121 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:07.081921 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:07.082015 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:45:07.579302 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:07.579329 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:07.579337 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:07.579342 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:07.581902 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:08.079665 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:08.079690 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:08.079699 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:08.079703 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:08.082224 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:08.578940 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:08.578964 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:08.578973 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:08.578977 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:08.581559 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:09.079392 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:09.079418 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:09.079427 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:09.079432 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:09.081596 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:09.579388 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:09.579417 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:09.579430 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:09.579439 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:09.582142 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:09.582229 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:45:10.078898 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:10.078921 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:10.078931 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:10.078934 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:10.081677 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:10.579467 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:10.579494 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:10.579502 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:10.579512 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:10.582156 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:11.078858 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:11.078887 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:11.078905 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:11.078909 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:11.081245 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:11.578893 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:11.578920 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:11.578929 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:11.578933 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:11.581344 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:12.079028 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:12.079053 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:12.079062 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:12.079066 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:12.081499 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:12.081602 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
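
For reference, each logged request/response pair is a plain HTTPS GET carrying the Accept header shown above. A standalone sketch of that raw request follows; it is illustrative only: it skips certificate verification instead of loading the cluster CA and sends no credentials, so a real apiserver would typically answer 401/403 for it rather than the 404 a properly authenticated client sees here.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    )

    func main() {
    	// For illustration only: InsecureSkipVerify stands in for the cluster
    	// CA from the kubeconfig, and no bearer token or client cert is sent.
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}
    	req, err := http.NewRequest(http.MethodGet,
    		"https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02", nil)
    	if err != nil {
    		panic(err)
    	}
    	// Same content negotiation the round-tripper logs above.
    	req.Header.Set("Accept", "application/vnd.kubernetes.protobuf,application/json")
    	resp, err := client.Do(req)
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	fmt.Println(resp.Status) // "404 Not Found" while the node is unregistered
    }
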
I0414 14:45:12.579473 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:12.579501 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:12.579513 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:12.579521 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:12.585019 1218338 round_trippers.go:581] Response Status: 404 Not Found in 5 milliseconds
I0414 14:45:13.078731 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:13.078754 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:13.078763 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:13.078767 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:13.081183 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:13.578902 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:13.578931 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:13.578958 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:13.578963 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:13.581183 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:14.078893 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:14.078918 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:14.078927 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:14.078931 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:14.081425 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:14.579129 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:14.579154 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:14.579167 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:14.579173 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:14.581658 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:14.581744 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:45:15.079725 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:15.079748 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:15.079757 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:15.079761 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:15.082141 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:15.578833 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:15.578858 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:15.578867 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:15.578870 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:15.581476 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:16.079180 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:16.079206 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:16.079214 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:16.079220 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:16.081680 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:16.579294 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:16.579318 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:16.579330 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:16.579338 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:16.581855 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:16.581965 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:45:17.079615 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:17.079641 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:17.079650 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:17.079655 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:17.082045 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:17.578760 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:17.578783 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:17.578791 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:17.578805 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:17.581428 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:18.079123 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:18.079149 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:18.079162 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:18.079167 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:18.081416 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:18.579129 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:18.579156 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:18.579167 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:18.579173 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:18.581634 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:19.079390 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:19.079417 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:19.079428 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:19.079433 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:19.081832 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:19.081925 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:45:19.579587 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:19.579613 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:19.579637 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:19.579644 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:19.581903 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:20.078941 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:20.078964 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:20.078973 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:20.078977 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:20.081439 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:20.579141 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:20.579171 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:20.579184 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:20.579192 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:20.581851 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:21.079651 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:21.079678 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:21.079688 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:21.079693 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:21.081986 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:21.082074 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:45:21.578843 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:21.578868 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:21.578877 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:21.578882 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:21.581378 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:22.079080 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:22.079105 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:22.079113 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:22.079118 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:22.081386 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:22.579355 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:22.579378 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:22.579387 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:22.579392 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:22.581697 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:23.079424 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:23.079455 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:23.079467 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:23.079473 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:23.081832 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:23.579608 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:23.579633 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:23.579643 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:23.579650 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:23.582061 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:23.582160 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:45:24.078760 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:24.078783 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:24.078791 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:24.078799 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:24.081871 1218338 round_trippers.go:581] Response Status: 404 Not Found in 3 milliseconds
I0414 14:45:24.579667 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:24.579699 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:24.579711 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:24.579719 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:24.582082 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:25.079067 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:25.079090 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:25.079099 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:25.079104 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:25.081401 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:25.579088 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:25.579135 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:25.579144 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:25.579149 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:25.582163 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:25.582288 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:45:26.078840 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:26.078865 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:26.078877 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:26.078885 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:26.081398 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:26.579158 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:26.579190 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:26.579204 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:26.579210 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:26.582046 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:27.078717 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:27.078742 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:27.078751 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:27.078757 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:27.081530 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:27.579280 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:27.579304 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:27.579315 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:27.579337 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:27.581528 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:28.078761 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:28.078787 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:28.078796 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:28.078801 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:28.082406 1218338 round_trippers.go:581] Response Status: 404 Not Found in 3 milliseconds
I0414 14:45:28.082518 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:45:28.579122 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:28.579146 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:28.579155 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:28.579159 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:28.581509 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:29.079220 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:29.079249 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:29.079286 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:29.079294 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:29.081303 1218338 round_trippers.go:581] Response Status: 404 Not Found in 1 milliseconds
I0414 14:45:29.579373 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:29.579396 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:29.579405 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:29.579409 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:29.583327 1218338 round_trippers.go:581] Response Status: 404 Not Found in 3 milliseconds
I0414 14:45:30.079391 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:30.079423 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:30.079443 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:30.079450 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:30.082014 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:30.578729 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:30.578755 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:30.578766 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:30.578771 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:30.582605 1218338 round_trippers.go:581] Response Status: 404 Not Found in 3 milliseconds
I0414 14:45:30.582724 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:45:31.079409 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:31.079434 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:31.079443 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:31.079448 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:31.081773 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:31.579648 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:31.579675 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:31.579684 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:31.579690 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:31.582165 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:32.079604 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:32.079628 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:32.079637 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:32.079643 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:32.082773 1218338 round_trippers.go:581] Response Status: 404 Not Found in 3 milliseconds
I0414 14:45:32.579630 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:32.579653 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:32.579661 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:32.579666 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:32.582236 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:33.078952 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:33.078975 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:33.078984 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:33.078987 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:33.081515 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:33.081612 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:45:33.579243 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:33.579285 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:33.579294 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:33.579298 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:33.582168 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:34.078876 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:34.078900 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:34.078912 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:34.078918 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:34.081469 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:34.579176 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:34.579200 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:34.579212 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:34.579226 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:34.581966 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:35.079011 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:35.079037 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:35.079045 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:35.079048 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:35.081854 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:35.081969 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:45:35.579653 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:35.579680 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:35.579688 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:35.579692 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:35.582634 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:36.079465 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:36.079505 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:36.079518 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:36.079525 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:36.082232 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:36.578985 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:36.579012 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:36.579021 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:36.579025 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:36.581719 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:37.079529 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:37.079554 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:37.079563 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:37.079568 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:37.082519 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:37.082616 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:45:37.579315 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:37.579340 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:37.579349 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:37.579353 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:37.582715 1218338 round_trippers.go:581] Response Status: 404 Not Found in 3 milliseconds
I0414 14:45:38.079441 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:38.079467 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:38.079475 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:38.079481 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:38.082843 1218338 round_trippers.go:581] Response Status: 404 Not Found in 3 milliseconds
I0414 14:45:38.579627 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:38.579659 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:38.579674 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:38.579681 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:38.582578 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:39.079481 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:39.079515 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:39.079528 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:39.079538 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:39.082190 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:39.578870 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:39.578894 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:39.578905 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:39.578911 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:39.581160 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:39.581263 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:45:40.079380 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:40.079406 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:40.079415 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:40.079418 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:40.081876 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:40.579769 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:40.579795 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:40.579811 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:40.579816 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:40.582505 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:41.079231 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:41.079278 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:41.079287 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:41.079293 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:41.081591 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:41.579354 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:41.579379 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:41.579389 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:41.579393 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:41.581850 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:41.581964 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:45:42.079676 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:42.079702 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:42.079714 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:42.079717 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:42.082019 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:42.578954 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:42.578978 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:42.578986 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:42.578990 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:42.581567 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:43.079357 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:43.079381 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:43.079393 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:43.079399 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:43.081873 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:43.579683 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:43.579708 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:43.579721 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:43.579728 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:43.582317 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:43.582442 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:45:44.079001 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:44.079026 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:44.079035 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:44.079042 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:44.081466 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:44.579175 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:44.579199 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:44.579207 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:44.579211 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:44.582060 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:45.079070 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:45.079100 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:45.079109 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:45.079114 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:45.081517 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:45.578846 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:45.578871 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:45.578880 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:45.578884 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:45.581260 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:46.079043 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:46.079072 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:46.079081 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:46.079087 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:46.081668 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:46.081781 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:45:46.579558 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:46.579586 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:46.579595 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:46.579601 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:46.581993 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:47.078748 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:47.078772 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:47.078781 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:47.078784 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:47.081572 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:47.579206 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:47.579238 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:47.579274 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:47.579283 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:47.581855 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:48.079643 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:48.079669 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:48.079684 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:48.079688 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:48.082004 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:48.082098 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:45:48.578737 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:48.578772 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:48.578782 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:48.578786 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:48.581035 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:49.078729 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:49.078754 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:49.078762 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:49.078768 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:49.081037 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:49.578737 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:49.578762 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:49.578773 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:49.578777 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:49.581160 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:50.079247 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:50.079282 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:50.079291 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:50.079297 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:50.081683 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:50.579536 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:50.579567 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:50.579580 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:50.579587 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:50.582169 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:50.582259 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:45:51.078872 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:51.078904 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:51.078937 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:51.078946 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:51.081619 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:51.579478 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:51.579510 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:51.579521 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:51.579526 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:51.582304 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:52.079016 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:52.079048 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:52.079062 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:52.079069 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:52.081640 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:52.579431 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:52.579454 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:52.579463 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:52.579468 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:52.582094 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:53.078829 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:53.078852 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:53.078861 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:53.078866 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:53.081014 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:53.081094 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:45:53.578731 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:53.578760 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:53.578772 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:53.578777 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:53.581451 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:54.079202 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:54.079283 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:54.079297 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:54.079303 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:54.081344 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:54.579085 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:54.579110 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:54.579120 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:54.579126 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:54.581562 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:55.079630 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:55.079658 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:55.079667 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:55.079672 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:55.082071 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:55.082177 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
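	[editor's note] Each request/response group in this trace is printed by client-go's debug round-tripper (round_trippers.go), which wraps the HTTP transport and, at high log verbosity, prints the verb and URL, the request headers, and the response status together with the elapsed time. The following is a simplified stand-in for such a logging wrapper, not client-go's actual implementation; the type name loggingTransport and the example URL are invented for illustration.

	package main

	// A minimal http.RoundTripper wrapper that produces output shaped like
	// the round_trippers.go lines in this log. Simplified sketch;
	// client-go's real logger lives in k8s.io/client-go/transport and is
	// gated on klog verbosity levels.

	import (
		"log"
		"net/http"
		"time"
	)

	type loggingTransport struct {
		next http.RoundTripper
	}

	func (t loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
		log.Printf("%s %s", req.Method, req.URL)
		log.Printf("Request Headers:")
		for name, values := range req.Header {
			for _, v := range values {
				log.Printf("    %s: %s", name, v)
			}
		}
		start := time.Now()
		resp, err := t.next.RoundTrip(req)
		if err != nil {
			return nil, err
		}
		log.Printf("Response Status: %s in %d milliseconds", resp.Status, time.Since(start).Milliseconds())
		return resp, nil
	}

	func main() {
		client := &http.Client{Transport: loggingTransport{next: http.DefaultTransport}}
		// example.com stands in for the API server endpoint seen in the log.
		req, err := http.NewRequest(http.MethodGet, "https://example.com/", nil)
		if err != nil {
			log.Fatal(err)
		}
		req.Header.Set("Accept", "application/vnd.kubernetes.protobuf,application/json")
		resp, err := client.Do(req)
		if err != nil {
			log.Fatal(err)
		}
		resp.Body.Close()
	}

	Wrapping the transport, rather than instrumenting each call site, is why every API request in this report is logged in the same uniform format regardless of which subsystem issued it. It also explains the occasional reordering of the Accept and User-Agent header lines: Go's http.Header is a map, so header iteration order is not stable between requests.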
I0414 14:45:55.578792 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:55.578821 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:55.578834 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:55.578841 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:55.581614 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:56.079439 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:56.079468 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:56.079480 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:56.079487 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:56.082079 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:56.578744 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:56.578768 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:56.578777 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:56.578783 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:56.581460 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:57.079159 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:57.079195 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:57.079203 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:57.079209 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:57.081628 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:57.579411 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:57.579435 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:57.579444 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:57.579449 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:57.581739 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:57.581831 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:45:58.079577 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:58.079601 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:58.079609 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:58.079613 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:58.081689 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:58.579419 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:58.579445 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:58.579460 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:58.579465 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:58.582220 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:59.078945 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:59.078971 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:59.078979 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:59.078985 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:59.081071 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:45:59.578772 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:45:59.578796 1218338 round_trippers.go:476] Request Headers:
I0414 14:45:59.578804 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:45:59.578809 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:45:59.581379 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:00.079377 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:00.079409 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:00.079422 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:00.079430 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:00.081918 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:00.082009 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:46:00.579731 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:00.579760 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:00.579772 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:00.579781 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:00.582112 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:01.078818 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:01.078844 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:01.078853 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:01.078857 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:01.081380 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:01.579172 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:01.579199 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:01.579207 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:01.579212 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:01.581640 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:02.079421 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:02.079445 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:02.079462 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:02.079466 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:02.081987 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:02.082086 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:46:02.578816 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:02.578837 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:02.578845 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:02.578851 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:02.581179 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:03.078831 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:03.078856 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:03.078866 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:03.078870 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:03.081070 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:03.578760 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:03.578785 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:03.578793 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:03.578799 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:03.581079 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:04.078804 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:04.078829 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:04.078838 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:04.078843 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:04.081621 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:04.579458 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:04.579484 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:04.579496 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:04.579503 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:04.581667 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:04.581776 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:46:05.079610 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:05.079633 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:05.079642 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:05.079647 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:05.081848 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:05.579624 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:05.579654 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:05.579667 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:05.579672 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:05.582125 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:06.078809 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:06.078833 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:06.078842 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:06.078849 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:06.081663 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:06.579504 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:06.579530 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:06.579539 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:06.579544 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:06.582207 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:06.582299 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:46:07.078952 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:07.078979 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:07.078987 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:07.078991 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:07.081420 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:07.579095 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:07.579115 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:07.579123 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:07.579126 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:07.581100 1218338 round_trippers.go:581] Response Status: 404 Not Found in 1 milliseconds
I0414 14:46:08.079446 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:08.079482 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:08.079497 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:08.079503 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:08.086034 1218338 round_trippers.go:581] Response Status: 404 Not Found in 6 milliseconds
I0414 14:46:08.578766 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:08.578800 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:08.578815 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:08.578821 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:08.581247 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:09.079012 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:09.079038 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:09.079047 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:09.079050 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:09.082074 1218338 round_trippers.go:581] Response Status: 404 Not Found in 3 milliseconds
I0414 14:46:09.082157 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:46:09.578764 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:09.578788 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:09.578796 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:09.578800 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:09.581007 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:10.078925 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:10.078948 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:10.078957 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:10.078960 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:10.081183 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:10.578908 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:10.578932 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:10.578940 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:10.578946 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:10.581261 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:11.078982 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:11.079009 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:11.079017 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:11.079025 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:11.081376 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:11.579275 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:11.579300 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:11.579308 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:11.579317 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:11.582132 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:11.582213 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:46:12.078870 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:12.078896 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:12.078909 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:12.078913 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:12.081301 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:12.579074 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:12.579095 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:12.579103 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:12.579108 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:12.581572 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:13.079362 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:13.079387 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:13.079396 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:13.079400 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:13.081758 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:13.579532 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:13.579556 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:13.579564 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:13.579569 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:13.581809 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:14.079592 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:14.079614 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:14.079622 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:14.079625 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:14.081939 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:14.082019 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:46:14.579745 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:14.579767 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:14.579776 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:14.579780 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:14.582177 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:15.079356 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:15.079380 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:15.079389 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:15.079394 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:15.082393 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:15.579683 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:15.579705 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:15.579716 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:15.579722 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:15.582111 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:16.078825 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:16.078849 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:16.078858 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:16.078862 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:16.081059 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:16.578866 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:16.578890 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:16.578899 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:16.578904 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:16.581620 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:16.581711 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:46:17.079443 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:17.079467 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:17.079475 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:17.079480 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:17.082068 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:17.578777 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:17.578795 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:17.578803 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:17.578809 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:17.581158 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:18.078872 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:18.078897 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:18.078906 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:18.078909 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:18.081106 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:18.578795 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:18.578820 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:18.578829 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:18.578835 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:18.581366 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:19.079074 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:19.079099 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:19.079108 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:19.079112 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:19.081485 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:19.081574 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:46:19.579178 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:19.579202 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:19.579213 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:19.579219 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:19.581468 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:20.079648 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:20.079682 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:20.079695 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:20.079703 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:20.082112 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:20.578811 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:20.578834 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:20.578843 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:20.578849 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:20.581210 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:21.078981 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:21.079005 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:21.079014 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:21.079018 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:21.081412 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:21.579118 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:21.579143 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:21.579152 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:21.579155 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:21.581853 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:21.581953 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:46:22.079640 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:22.079664 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:22.079673 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:22.079678 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:22.081929 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:22.578720 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:22.578741 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:22.578749 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:22.578753 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:22.581247 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:23.079006 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:23.079033 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:23.079042 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:23.079046 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:23.081568 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:23.579444 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:23.579469 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:23.579479 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:23.579483 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:23.581636 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:24.079421 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:24.079446 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:24.079455 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:24.079462 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:24.082134 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:24.082259 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:46:24.578820 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:24.578842 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:24.578854 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:24.578859 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:24.581179 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:25.079311 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:25.079334 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:25.079342 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:25.079346 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:25.081849 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:25.579604 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:25.579631 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:25.579640 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:25.579645 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:25.581933 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:26.079737 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:26.079767 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:26.079778 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:26.079788 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:26.082491 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:26.082594 1218338 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
I0414 14:46:26.579382 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:26.579412 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:26.579426 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:26.579433 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:26.582213 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:27.078797 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:27.078823 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:27.078832 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:27.078836 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:27.081559 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:46:27.578841 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:46:27.578866 1218338 round_trippers.go:476] Request Headers:
I0414 14:46:27.578875 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:46:27.578880 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:46:27.581554 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	[... ~107 further polls elided: the same GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02 was retried roughly every 500ms from 14:46:28 through 14:47:21, each answered "404 Not Found" in 1-3 milliseconds, with node_ready.go:53 logging `error getting node "ha-290859-m02": nodes "ha-290859-m02" not found` about every 2.5 seconds throughout ...]
I0414 14:47:22.079506 1218338 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
I0414 14:47:22.079533 1218338 round_trippers.go:476] Request Headers:
I0414 14:47:22.079542 1218338 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
I0414 14:47:22.079547 1218338 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0414 14:47:22.082119 1218338 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
I0414 14:47:22.578876 1218338 node_ready.go:38] duration metric: took 4m0.000340348s for node "ha-290859-m02" to be "Ready" ...
I0414 14:47:22.581156 1218338 out.go:201] 
W0414 14:47:22.582560 1218338 out.go:270] X Exiting due to GUEST_NODE_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
W0414 14:47:22.582578 1218338 out.go:270] * 
W0414 14:47:22.586748 1218338 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0414 14:47:22.588090 1218338 out.go:201] 

ha_test.go:425: secondary control-plane node start returned an error. args "out/minikube-linux-amd64 -p ha-290859 node start m02 -v=7 --alsologtostderr": exit status 80
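The 404 loop above is minikube's node-readiness poll: it GETs the Node object straight from the apiserver every ~500ms and gives up once the wait budget is spent (the node_ready wait of 4m0s runs inside the overall 6m0s node-start budget named in the GUEST_NODE_START error). As a minimal reproduction sketch, assuming the kubeconfig written by this run is still in place (the context and node names are taken from the log), the same request can be replayed by hand:

	kubectl --context ha-290859 get --raw "/api/v1/nodes/ha-290859-m02"

A persistent 404 means the kubelet on m02 never registered its Node object, which matches the "kubelet: Stopped" rows in the status output that follows.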
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-290859 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-290859 status -v=7 --alsologtostderr: exit status 2 (605.627925ms)

-- stdout --
	ha-290859
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-290859-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-290859-m03
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0414 14:47:22.845277 1219310 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:47:22.845418 1219310 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:47:22.845427 1219310 out.go:358] Setting ErrFile to fd 2...
	I0414 14:47:22.845431 1219310 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:47:22.845609 1219310 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
	I0414 14:47:22.845783 1219310 out.go:352] Setting JSON to false
	I0414 14:47:22.845818 1219310 mustload.go:65] Loading cluster: ha-290859
	I0414 14:47:22.845938 1219310 notify.go:220] Checking for updates...
	I0414 14:47:22.846203 1219310 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:47:22.846233 1219310 status.go:174] checking status of ha-290859 ...
	I0414 14:47:22.846672 1219310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:22.846724 1219310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:22.869359 1219310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34235
	I0414 14:47:22.869885 1219310 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:22.870475 1219310 main.go:141] libmachine: Using API Version  1
	I0414 14:47:22.870507 1219310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:22.870895 1219310 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:22.871091 1219310 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:47:22.872941 1219310 status.go:371] ha-290859 host status = "Running" (err=<nil>)
	I0414 14:47:22.872959 1219310 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:47:22.873279 1219310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:22.873322 1219310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:22.888737 1219310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44905
	I0414 14:47:22.889255 1219310 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:22.889740 1219310 main.go:141] libmachine: Using API Version  1
	I0414 14:47:22.889771 1219310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:22.890174 1219310 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:22.890406 1219310 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:47:22.893361 1219310 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:47:22.893789 1219310 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:47:22.893819 1219310 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:47:22.893951 1219310 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:47:22.894265 1219310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:22.894331 1219310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:22.909259 1219310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41127
	I0414 14:47:22.909759 1219310 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:22.910272 1219310 main.go:141] libmachine: Using API Version  1
	I0414 14:47:22.910293 1219310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:22.910673 1219310 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:22.910877 1219310 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:47:22.911108 1219310 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 14:47:22.911132 1219310 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:47:22.913588 1219310 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:47:22.914027 1219310 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:47:22.914053 1219310 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:47:22.914156 1219310 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:47:22.914316 1219310 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:47:22.914443 1219310 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:47:22.914629 1219310 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:47:22.999423 1219310 ssh_runner.go:195] Run: systemctl --version
	I0414 14:47:23.006901 1219310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:47:23.025010 1219310 kubeconfig.go:125] found "ha-290859" server: "https://192.168.39.254:8443"
	I0414 14:47:23.025056 1219310 api_server.go:166] Checking apiserver status ...
	I0414 14:47:23.025099 1219310 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:47:23.039599 1219310 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1191/cgroup
	W0414 14:47:23.049339 1219310 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1191/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0414 14:47:23.049406 1219310 ssh_runner.go:195] Run: ls
	I0414 14:47:23.054363 1219310 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0414 14:47:23.059780 1219310 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0414 14:47:23.059803 1219310 status.go:463] ha-290859 apiserver status = Running (err=<nil>)
	I0414 14:47:23.059814 1219310 status.go:176] ha-290859 status: &{Name:ha-290859 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 14:47:23.059830 1219310 status.go:174] checking status of ha-290859-m02 ...
	I0414 14:47:23.060153 1219310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:23.060207 1219310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:23.075445 1219310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38101
	I0414 14:47:23.075929 1219310 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:23.076382 1219310 main.go:141] libmachine: Using API Version  1
	I0414 14:47:23.076404 1219310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:23.076755 1219310 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:23.076963 1219310 main.go:141] libmachine: (ha-290859-m02) Calling .GetState
	I0414 14:47:23.078626 1219310 status.go:371] ha-290859-m02 host status = "Running" (err=<nil>)
	I0414 14:47:23.078644 1219310 host.go:66] Checking if "ha-290859-m02" exists ...
	I0414 14:47:23.079063 1219310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:23.079112 1219310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:23.094364 1219310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32991
	I0414 14:47:23.094802 1219310 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:23.095311 1219310 main.go:141] libmachine: Using API Version  1
	I0414 14:47:23.095332 1219310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:23.095701 1219310 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:23.095964 1219310 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:47:23.099131 1219310 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:47:23.099592 1219310 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:43:15 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:47:23.099617 1219310 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:47:23.099753 1219310 host.go:66] Checking if "ha-290859-m02" exists ...
	I0414 14:47:23.100065 1219310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:23.100103 1219310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:23.115897 1219310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37077
	I0414 14:47:23.116310 1219310 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:23.116806 1219310 main.go:141] libmachine: Using API Version  1
	I0414 14:47:23.116823 1219310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:23.117184 1219310 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:23.117372 1219310 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:47:23.117581 1219310 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 14:47:23.117607 1219310 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:47:23.120427 1219310 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:47:23.120826 1219310 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:43:15 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:47:23.120845 1219310 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:47:23.120959 1219310 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:47:23.121205 1219310 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:47:23.121399 1219310 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:47:23.121614 1219310 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:47:23.203399 1219310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:47:23.218849 1219310 kubeconfig.go:125] found "ha-290859" server: "https://192.168.39.254:8443"
	I0414 14:47:23.218882 1219310 api_server.go:166] Checking apiserver status ...
	I0414 14:47:23.218924 1219310 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0414 14:47:23.230635 1219310 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0414 14:47:23.230666 1219310 status.go:463] ha-290859-m02 apiserver status = Stopped (err=<nil>)
	I0414 14:47:23.230679 1219310 status.go:176] ha-290859-m02 status: &{Name:ha-290859-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 14:47:23.230703 1219310 status.go:174] checking status of ha-290859-m03 ...
	I0414 14:47:23.231036 1219310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:23.231084 1219310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:23.247308 1219310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46423
	I0414 14:47:23.247857 1219310 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:23.248379 1219310 main.go:141] libmachine: Using API Version  1
	I0414 14:47:23.248404 1219310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:23.248819 1219310 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:23.249090 1219310 main.go:141] libmachine: (ha-290859-m03) Calling .GetState
	I0414 14:47:23.250765 1219310 status.go:371] ha-290859-m03 host status = "Running" (err=<nil>)
	I0414 14:47:23.250782 1219310 host.go:66] Checking if "ha-290859-m03" exists ...
	I0414 14:47:23.251091 1219310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:23.251131 1219310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:23.269801 1219310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41337
	I0414 14:47:23.270292 1219310 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:23.270824 1219310 main.go:141] libmachine: Using API Version  1
	I0414 14:47:23.270849 1219310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:23.271244 1219310 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:23.271437 1219310 main.go:141] libmachine: (ha-290859-m03) Calling .GetIP
	I0414 14:47:23.274244 1219310 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:47:23.274691 1219310 main.go:141] libmachine: (ha-290859-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:4a:72", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:42:14 +0000 UTC Type:0 Mac:52:54:00:b7:4a:72 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-290859-m03 Clientid:01:52:54:00:b7:4a:72}
	I0414 14:47:23.274723 1219310 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined IP address 192.168.39.112 and MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:47:23.274829 1219310 host.go:66] Checking if "ha-290859-m03" exists ...
	I0414 14:47:23.275306 1219310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:23.275358 1219310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:23.291832 1219310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36389
	I0414 14:47:23.292385 1219310 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:23.292883 1219310 main.go:141] libmachine: Using API Version  1
	I0414 14:47:23.292920 1219310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:23.293312 1219310 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:23.293479 1219310 main.go:141] libmachine: (ha-290859-m03) Calling .DriverName
	I0414 14:47:23.293655 1219310 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 14:47:23.293677 1219310 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHHostname
	I0414 14:47:23.296349 1219310 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:47:23.296872 1219310 main.go:141] libmachine: (ha-290859-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:4a:72", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:42:14 +0000 UTC Type:0 Mac:52:54:00:b7:4a:72 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-290859-m03 Clientid:01:52:54:00:b7:4a:72}
	I0414 14:47:23.296928 1219310 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined IP address 192.168.39.112 and MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:47:23.297082 1219310 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHPort
	I0414 14:47:23.297213 1219310 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHKeyPath
	I0414 14:47:23.297377 1219310 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHUsername
	I0414 14:47:23.297510 1219310 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m03/id_rsa Username:docker}
	I0414 14:47:23.379300 1219310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:47:23.396876 1219310 status.go:176] ha-290859-m03 status: &{Name:ha-290859-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
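Two details of the status probe above are worth calling out. First, the apiserver check is a three-step sequence; the commands below are the ones the log itself runs over SSH (pid 1191 is the value from this run), with a curl stand-in for the final healthz call, assuming anonymous access to /healthz is allowed as in a default Kubernetes install:

	sudo pgrep -xnf kube-apiserver.*minikube.*       # resolve the apiserver pid; exit status 1 => "Stopped"
	sudo egrep ^[0-9]+:freezer: /proc/1191/cgroup    # freezer cgroup lookup; fails on cgroup v2 hosts,
	                                                 # so the W "unable to find freezer cgroup" line is benign
	curl -k https://192.168.39.254:8443/healthz      # control-plane liveness; "ok" => "Running"

Second, the server being probed is https://192.168.39.254:8443, not any node's own address (.110/.111/.112); this appears to be the HA virtual IP that minikube's --ha mode puts in front of the control planes, which is why healthz still returns 200 while m02's apiserver is down: the surviving apiserver on ha-290859 answers through the VIP.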
I0414 14:47:23.403625 1203639 retry.go:31] will retry after 1.098939298s: exit status 2
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-290859 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-290859 status -v=7 --alsologtostderr: exit status 2 (594.398602ms)

-- stdout --
	ha-290859
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-290859-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-290859-m03
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0414 14:47:24.546134 1219376 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:47:24.546367 1219376 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:47:24.546375 1219376 out.go:358] Setting ErrFile to fd 2...
	I0414 14:47:24.546379 1219376 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:47:24.546551 1219376 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
	I0414 14:47:24.546698 1219376 out.go:352] Setting JSON to false
	I0414 14:47:24.546731 1219376 mustload.go:65] Loading cluster: ha-290859
	I0414 14:47:24.546832 1219376 notify.go:220] Checking for updates...
	I0414 14:47:24.547151 1219376 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:47:24.547182 1219376 status.go:174] checking status of ha-290859 ...
	I0414 14:47:24.547711 1219376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:24.547786 1219376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:24.563942 1219376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35953
	I0414 14:47:24.564588 1219376 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:24.565299 1219376 main.go:141] libmachine: Using API Version  1
	I0414 14:47:24.565324 1219376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:24.565919 1219376 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:24.566155 1219376 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:47:24.567901 1219376 status.go:371] ha-290859 host status = "Running" (err=<nil>)
	I0414 14:47:24.567921 1219376 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:47:24.568264 1219376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:24.568317 1219376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:24.584114 1219376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44115
	I0414 14:47:24.584564 1219376 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:24.585044 1219376 main.go:141] libmachine: Using API Version  1
	I0414 14:47:24.585067 1219376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:24.585435 1219376 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:24.585625 1219376 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:47:24.588705 1219376 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:47:24.589128 1219376 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:47:24.589156 1219376 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:47:24.589271 1219376 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:47:24.589619 1219376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:24.589671 1219376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:24.605892 1219376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41073
	I0414 14:47:24.606438 1219376 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:24.606894 1219376 main.go:141] libmachine: Using API Version  1
	I0414 14:47:24.606933 1219376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:24.607315 1219376 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:24.607514 1219376 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:47:24.607758 1219376 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 14:47:24.607789 1219376 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:47:24.610334 1219376 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:47:24.610727 1219376 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:47:24.610764 1219376 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:47:24.610902 1219376 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:47:24.611078 1219376 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:47:24.611247 1219376 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:47:24.611398 1219376 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:47:24.694935 1219376 ssh_runner.go:195] Run: systemctl --version
	I0414 14:47:24.701857 1219376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:47:24.718421 1219376 kubeconfig.go:125] found "ha-290859" server: "https://192.168.39.254:8443"
	I0414 14:47:24.718480 1219376 api_server.go:166] Checking apiserver status ...
	I0414 14:47:24.718521 1219376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:47:24.734473 1219376 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1191/cgroup
	W0414 14:47:24.750571 1219376 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1191/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0414 14:47:24.750634 1219376 ssh_runner.go:195] Run: ls
	I0414 14:47:24.759051 1219376 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0414 14:47:24.763033 1219376 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0414 14:47:24.763065 1219376 status.go:463] ha-290859 apiserver status = Running (err=<nil>)
	I0414 14:47:24.763079 1219376 status.go:176] ha-290859 status: &{Name:ha-290859 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 14:47:24.763101 1219376 status.go:174] checking status of ha-290859-m02 ...
	I0414 14:47:24.763552 1219376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:24.763610 1219376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:24.779502 1219376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36199
	I0414 14:47:24.780025 1219376 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:24.780601 1219376 main.go:141] libmachine: Using API Version  1
	I0414 14:47:24.780626 1219376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:24.781052 1219376 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:24.781309 1219376 main.go:141] libmachine: (ha-290859-m02) Calling .GetState
	I0414 14:47:24.782713 1219376 status.go:371] ha-290859-m02 host status = "Running" (err=<nil>)
	I0414 14:47:24.782733 1219376 host.go:66] Checking if "ha-290859-m02" exists ...
	I0414 14:47:24.783027 1219376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:24.783071 1219376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:24.798592 1219376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35773
	I0414 14:47:24.799086 1219376 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:24.799579 1219376 main.go:141] libmachine: Using API Version  1
	I0414 14:47:24.799606 1219376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:24.799972 1219376 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:24.800195 1219376 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:47:24.803422 1219376 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:47:24.803855 1219376 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:43:15 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:47:24.803884 1219376 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:47:24.803999 1219376 host.go:66] Checking if "ha-290859-m02" exists ...
	I0414 14:47:24.804331 1219376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:24.804378 1219376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:24.820126 1219376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42635
	I0414 14:47:24.820758 1219376 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:24.821225 1219376 main.go:141] libmachine: Using API Version  1
	I0414 14:47:24.821250 1219376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:24.821584 1219376 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:24.821762 1219376 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:47:24.821981 1219376 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 14:47:24.822005 1219376 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:47:24.824691 1219376 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:47:24.825174 1219376 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:43:15 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:47:24.825199 1219376 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:47:24.825324 1219376 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:47:24.825513 1219376 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:47:24.825687 1219376 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:47:24.825866 1219376 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:47:24.902336 1219376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:47:24.919547 1219376 kubeconfig.go:125] found "ha-290859" server: "https://192.168.39.254:8443"
	I0414 14:47:24.919578 1219376 api_server.go:166] Checking apiserver status ...
	I0414 14:47:24.919614 1219376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0414 14:47:24.931340 1219376 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0414 14:47:24.931372 1219376 status.go:463] ha-290859-m02 apiserver status = Stopped (err=<nil>)
	I0414 14:47:24.931385 1219376 status.go:176] ha-290859-m02 status: &{Name:ha-290859-m02 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 14:47:24.931404 1219376 status.go:174] checking status of ha-290859-m03 ...
	I0414 14:47:24.931752 1219376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:24.931793 1219376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:24.953421 1219376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39831
	I0414 14:47:24.953919 1219376 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:24.954361 1219376 main.go:141] libmachine: Using API Version  1
	I0414 14:47:24.954384 1219376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:24.954815 1219376 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:24.955036 1219376 main.go:141] libmachine: (ha-290859-m03) Calling .GetState
	I0414 14:47:24.956773 1219376 status.go:371] ha-290859-m03 host status = "Running" (err=<nil>)
	I0414 14:47:24.956795 1219376 host.go:66] Checking if "ha-290859-m03" exists ...
	I0414 14:47:24.957208 1219376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:24.957267 1219376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:24.972785 1219376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43535
	I0414 14:47:24.973318 1219376 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:24.973806 1219376 main.go:141] libmachine: Using API Version  1
	I0414 14:47:24.973839 1219376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:24.974236 1219376 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:24.974428 1219376 main.go:141] libmachine: (ha-290859-m03) Calling .GetIP
	I0414 14:47:24.977220 1219376 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:47:24.977731 1219376 main.go:141] libmachine: (ha-290859-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:4a:72", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:42:14 +0000 UTC Type:0 Mac:52:54:00:b7:4a:72 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-290859-m03 Clientid:01:52:54:00:b7:4a:72}
	I0414 14:47:24.977764 1219376 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined IP address 192.168.39.112 and MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:47:24.977893 1219376 host.go:66] Checking if "ha-290859-m03" exists ...
	I0414 14:47:24.978291 1219376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:24.978348 1219376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:24.994142 1219376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36183
	I0414 14:47:24.994607 1219376 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:24.995048 1219376 main.go:141] libmachine: Using API Version  1
	I0414 14:47:24.995070 1219376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:24.995457 1219376 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:24.995682 1219376 main.go:141] libmachine: (ha-290859-m03) Calling .DriverName
	I0414 14:47:24.995907 1219376 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 14:47:24.995934 1219376 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHHostname
	I0414 14:47:24.998726 1219376 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:47:24.999139 1219376 main.go:141] libmachine: (ha-290859-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:4a:72", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:42:14 +0000 UTC Type:0 Mac:52:54:00:b7:4a:72 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-290859-m03 Clientid:01:52:54:00:b7:4a:72}
	I0414 14:47:24.999173 1219376 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined IP address 192.168.39.112 and MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:47:24.999424 1219376 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHPort
	I0414 14:47:24.999589 1219376 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHKeyPath
	I0414 14:47:24.999727 1219376 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHUsername
	I0414 14:47:24.999884 1219376 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m03/id_rsa Username:docker}
	I0414 14:47:25.078657 1219376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:47:25.092043 1219376 status.go:176] ha-290859-m03 status: &{Name:ha-290859-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
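Note that m02's kubelet reading flaps across retries: "Stopped" in the previous status call, "Running" in this one, and "Stopped" again in the calls that follow. The probe is a single `sudo systemctl is-active --quiet service kubelet` over SSH (visible above), so a kubelet caught in a systemd restart loop will read differently from call to call. As a sketch for checking directly, assuming minikube's usual profile and node flags:

	minikube -p ha-290859 ssh -n m02 -- sudo systemctl status kubelet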
I0414 14:47:25.098584 1203639 retry.go:31] will retry after 1.591584721s: exit status 2
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-290859 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-290859 status -v=7 --alsologtostderr: exit status 2 (578.925497ms)

-- stdout --
	ha-290859
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-290859-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-290859-m03
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0414 14:47:26.732888 1219445 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:47:26.733034 1219445 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:47:26.733040 1219445 out.go:358] Setting ErrFile to fd 2...
	I0414 14:47:26.733044 1219445 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:47:26.733263 1219445 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
	I0414 14:47:26.733426 1219445 out.go:352] Setting JSON to false
	I0414 14:47:26.733457 1219445 mustload.go:65] Loading cluster: ha-290859
	I0414 14:47:26.733574 1219445 notify.go:220] Checking for updates...
	I0414 14:47:26.733839 1219445 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:47:26.733862 1219445 status.go:174] checking status of ha-290859 ...
	I0414 14:47:26.734284 1219445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:26.734332 1219445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:26.751553 1219445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40789
	I0414 14:47:26.752126 1219445 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:26.752787 1219445 main.go:141] libmachine: Using API Version  1
	I0414 14:47:26.752816 1219445 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:26.753214 1219445 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:26.753441 1219445 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:47:26.755203 1219445 status.go:371] ha-290859 host status = "Running" (err=<nil>)
	I0414 14:47:26.755228 1219445 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:47:26.755621 1219445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:26.755680 1219445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:26.775041 1219445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44069
	I0414 14:47:26.775608 1219445 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:26.776200 1219445 main.go:141] libmachine: Using API Version  1
	I0414 14:47:26.776219 1219445 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:26.776654 1219445 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:26.776912 1219445 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:47:26.779941 1219445 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:47:26.780367 1219445 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:47:26.780392 1219445 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:47:26.780515 1219445 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:47:26.780860 1219445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:26.780916 1219445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:26.797312 1219445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33627
	I0414 14:47:26.797795 1219445 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:26.798356 1219445 main.go:141] libmachine: Using API Version  1
	I0414 14:47:26.798386 1219445 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:26.798783 1219445 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:26.798960 1219445 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:47:26.799228 1219445 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 14:47:26.799270 1219445 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:47:26.802861 1219445 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:47:26.803341 1219445 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:47:26.803366 1219445 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:47:26.803565 1219445 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:47:26.803780 1219445 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:47:26.803954 1219445 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:47:26.804116 1219445 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:47:26.886711 1219445 ssh_runner.go:195] Run: systemctl --version
	I0414 14:47:26.893088 1219445 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:47:26.907955 1219445 kubeconfig.go:125] found "ha-290859" server: "https://192.168.39.254:8443"
	I0414 14:47:26.907999 1219445 api_server.go:166] Checking apiserver status ...
	I0414 14:47:26.908035 1219445 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:47:26.921870 1219445 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1191/cgroup
	W0414 14:47:26.931813 1219445 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1191/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0414 14:47:26.931882 1219445 ssh_runner.go:195] Run: ls
	I0414 14:47:26.936280 1219445 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0414 14:47:26.940458 1219445 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0414 14:47:26.940483 1219445 status.go:463] ha-290859 apiserver status = Running (err=<nil>)
	I0414 14:47:26.940495 1219445 status.go:176] ha-290859 status: &{Name:ha-290859 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 14:47:26.940512 1219445 status.go:174] checking status of ha-290859-m02 ...
	I0414 14:47:26.940814 1219445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:26.940850 1219445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:26.956802 1219445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38379
	I0414 14:47:26.957346 1219445 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:26.957850 1219445 main.go:141] libmachine: Using API Version  1
	I0414 14:47:26.957876 1219445 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:26.958252 1219445 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:26.958458 1219445 main.go:141] libmachine: (ha-290859-m02) Calling .GetState
	I0414 14:47:26.960210 1219445 status.go:371] ha-290859-m02 host status = "Running" (err=<nil>)
	I0414 14:47:26.960232 1219445 host.go:66] Checking if "ha-290859-m02" exists ...
	I0414 14:47:26.960528 1219445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:26.960565 1219445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:26.976543 1219445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37921
	I0414 14:47:26.977111 1219445 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:26.977751 1219445 main.go:141] libmachine: Using API Version  1
	I0414 14:47:26.977776 1219445 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:26.978218 1219445 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:26.978430 1219445 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:47:26.981083 1219445 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:47:26.981476 1219445 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:43:15 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:47:26.981510 1219445 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:47:26.981665 1219445 host.go:66] Checking if "ha-290859-m02" exists ...
	I0414 14:47:26.981968 1219445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:26.982027 1219445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:26.997342 1219445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42187
	I0414 14:47:26.997846 1219445 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:26.998341 1219445 main.go:141] libmachine: Using API Version  1
	I0414 14:47:26.998364 1219445 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:26.998714 1219445 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:26.998887 1219445 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:47:26.999081 1219445 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 14:47:26.999106 1219445 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:47:27.001890 1219445 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:47:27.002375 1219445 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:43:15 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:47:27.002406 1219445 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:47:27.002572 1219445 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:47:27.002734 1219445 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:47:27.002920 1219445 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:47:27.003082 1219445 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:47:27.082520 1219445 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:47:27.096554 1219445 kubeconfig.go:125] found "ha-290859" server: "https://192.168.39.254:8443"
	I0414 14:47:27.096588 1219445 api_server.go:166] Checking apiserver status ...
	I0414 14:47:27.096629 1219445 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0414 14:47:27.108476 1219445 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0414 14:47:27.108500 1219445 status.go:463] ha-290859-m02 apiserver status = Stopped (err=<nil>)
	I0414 14:47:27.108509 1219445 status.go:176] ha-290859-m02 status: &{Name:ha-290859-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 14:47:27.108528 1219445 status.go:174] checking status of ha-290859-m03 ...
	I0414 14:47:27.108859 1219445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:27.108905 1219445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:27.124729 1219445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38811
	I0414 14:47:27.125242 1219445 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:27.125732 1219445 main.go:141] libmachine: Using API Version  1
	I0414 14:47:27.125757 1219445 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:27.126164 1219445 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:27.126492 1219445 main.go:141] libmachine: (ha-290859-m03) Calling .GetState
	I0414 14:47:27.128073 1219445 status.go:371] ha-290859-m03 host status = "Running" (err=<nil>)
	I0414 14:47:27.128111 1219445 host.go:66] Checking if "ha-290859-m03" exists ...
	I0414 14:47:27.128418 1219445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:27.128471 1219445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:27.144396 1219445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39143
	I0414 14:47:27.144863 1219445 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:27.145298 1219445 main.go:141] libmachine: Using API Version  1
	I0414 14:47:27.145329 1219445 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:27.145678 1219445 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:27.145856 1219445 main.go:141] libmachine: (ha-290859-m03) Calling .GetIP
	I0414 14:47:27.148554 1219445 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:47:27.148975 1219445 main.go:141] libmachine: (ha-290859-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:4a:72", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:42:14 +0000 UTC Type:0 Mac:52:54:00:b7:4a:72 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-290859-m03 Clientid:01:52:54:00:b7:4a:72}
	I0414 14:47:27.149015 1219445 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined IP address 192.168.39.112 and MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:47:27.149148 1219445 host.go:66] Checking if "ha-290859-m03" exists ...
	I0414 14:47:27.149559 1219445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:27.149610 1219445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:27.165498 1219445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33889
	I0414 14:47:27.166105 1219445 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:27.166630 1219445 main.go:141] libmachine: Using API Version  1
	I0414 14:47:27.166655 1219445 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:27.167029 1219445 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:27.167211 1219445 main.go:141] libmachine: (ha-290859-m03) Calling .DriverName
	I0414 14:47:27.167448 1219445 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 14:47:27.167476 1219445 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHHostname
	I0414 14:47:27.170323 1219445 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:47:27.170758 1219445 main.go:141] libmachine: (ha-290859-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:4a:72", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:42:14 +0000 UTC Type:0 Mac:52:54:00:b7:4a:72 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-290859-m03 Clientid:01:52:54:00:b7:4a:72}
	I0414 14:47:27.170793 1219445 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined IP address 192.168.39.112 and MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:47:27.170921 1219445 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHPort
	I0414 14:47:27.171108 1219445 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHKeyPath
	I0414 14:47:27.171294 1219445 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHUsername
	I0414 14:47:27.171454 1219445 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m03/id_rsa Username:docker}
	I0414 14:47:27.250962 1219445 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:47:27.264040 1219445 status.go:176] ha-290859-m03 status: &{Name:ha-290859-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0414 14:47:27.270487 1203639 retry.go:31] will retry after 2.661539999s: exit status 2
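Note: the retry.go lines here show the test harness re-running "out/minikube-linux-amd64 -p ha-290859 status" with a growing delay while m02 keeps reporting kubelet/apiserver Stopped (exit status 2). Below is a minimal, self-contained sketch of that retry-with-backoff pattern; backoffRetry is a hypothetical helper written for illustration only, not minikube's actual retry package.

// Sketch of the retry loop suggested by the "will retry after ..." log lines.
// The helper roughly doubles the delay between attempts until maxWait elapses.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func backoffRetry(fn func() error, initial, maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	delay := initial
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().Add(delay).After(deadline) {
			return fmt.Errorf("gave up: %w", err)
		}
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2
	}
}

func main() {
	err := backoffRetry(func() error {
		// "minikube status" exits non-zero while any node reports a stopped component.
		return exec.Command("out/minikube-linux-amd64", "-p", "ha-290859", "status").Run()
	}, 2*time.Second, 2*time.Minute)
	if err != nil {
		fmt.Println(err)
	}
}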
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-290859 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-290859 status -v=7 --alsologtostderr: exit status 2 (575.315129ms)

-- stdout --
	ha-290859
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-290859-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-290859-m03
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0414 14:47:29.975300 1219527 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:47:29.975429 1219527 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:47:29.975440 1219527 out.go:358] Setting ErrFile to fd 2...
	I0414 14:47:29.975446 1219527 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:47:29.975646 1219527 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
	I0414 14:47:29.975874 1219527 out.go:352] Setting JSON to false
	I0414 14:47:29.975909 1219527 mustload.go:65] Loading cluster: ha-290859
	I0414 14:47:29.976026 1219527 notify.go:220] Checking for updates...
	I0414 14:47:29.976415 1219527 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:47:29.976449 1219527 status.go:174] checking status of ha-290859 ...
	I0414 14:47:29.976923 1219527 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:29.977008 1219527 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:29.993879 1219527 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39011
	I0414 14:47:29.994462 1219527 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:29.995149 1219527 main.go:141] libmachine: Using API Version  1
	I0414 14:47:29.995188 1219527 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:29.995739 1219527 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:29.996294 1219527 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:47:29.998141 1219527 status.go:371] ha-290859 host status = "Running" (err=<nil>)
	I0414 14:47:29.998166 1219527 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:47:29.998505 1219527 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:29.998555 1219527 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:30.015133 1219527 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39957
	I0414 14:47:30.015590 1219527 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:30.016104 1219527 main.go:141] libmachine: Using API Version  1
	I0414 14:47:30.016126 1219527 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:30.016489 1219527 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:30.016718 1219527 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:47:30.019772 1219527 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:47:30.020169 1219527 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:47:30.020207 1219527 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:47:30.020329 1219527 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:47:30.020626 1219527 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:30.020664 1219527 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:30.036689 1219527 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36277
	I0414 14:47:30.037125 1219527 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:30.037572 1219527 main.go:141] libmachine: Using API Version  1
	I0414 14:47:30.037592 1219527 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:30.037972 1219527 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:30.038143 1219527 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:47:30.038342 1219527 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 14:47:30.038365 1219527 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:47:30.041362 1219527 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:47:30.041825 1219527 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:47:30.041847 1219527 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:47:30.041997 1219527 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:47:30.042191 1219527 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:47:30.042354 1219527 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:47:30.042550 1219527 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:47:30.127507 1219527 ssh_runner.go:195] Run: systemctl --version
	I0414 14:47:30.132873 1219527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:47:30.154354 1219527 kubeconfig.go:125] found "ha-290859" server: "https://192.168.39.254:8443"
	I0414 14:47:30.154409 1219527 api_server.go:166] Checking apiserver status ...
	I0414 14:47:30.154451 1219527 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:47:30.167584 1219527 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1191/cgroup
	W0414 14:47:30.176720 1219527 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1191/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0414 14:47:30.176792 1219527 ssh_runner.go:195] Run: ls
	I0414 14:47:30.180608 1219527 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0414 14:47:30.185180 1219527 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0414 14:47:30.185203 1219527 status.go:463] ha-290859 apiserver status = Running (err=<nil>)
	I0414 14:47:30.185214 1219527 status.go:176] ha-290859 status: &{Name:ha-290859 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 14:47:30.185232 1219527 status.go:174] checking status of ha-290859-m02 ...
	I0414 14:47:30.185563 1219527 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:30.185612 1219527 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:30.201354 1219527 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40689
	I0414 14:47:30.201818 1219527 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:30.202235 1219527 main.go:141] libmachine: Using API Version  1
	I0414 14:47:30.202259 1219527 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:30.202649 1219527 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:30.202834 1219527 main.go:141] libmachine: (ha-290859-m02) Calling .GetState
	I0414 14:47:30.204339 1219527 status.go:371] ha-290859-m02 host status = "Running" (err=<nil>)
	I0414 14:47:30.204360 1219527 host.go:66] Checking if "ha-290859-m02" exists ...
	I0414 14:47:30.204784 1219527 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:30.204832 1219527 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:30.221390 1219527 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39059
	I0414 14:47:30.221904 1219527 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:30.222441 1219527 main.go:141] libmachine: Using API Version  1
	I0414 14:47:30.222475 1219527 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:30.222945 1219527 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:30.223193 1219527 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:47:30.225881 1219527 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:47:30.226369 1219527 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:43:15 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:47:30.226401 1219527 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:47:30.226575 1219527 host.go:66] Checking if "ha-290859-m02" exists ...
	I0414 14:47:30.227038 1219527 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:30.227094 1219527 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:30.243502 1219527 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42297
	I0414 14:47:30.244027 1219527 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:30.244520 1219527 main.go:141] libmachine: Using API Version  1
	I0414 14:47:30.244542 1219527 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:30.245019 1219527 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:30.245227 1219527 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:47:30.245503 1219527 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 14:47:30.245522 1219527 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:47:30.248498 1219527 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:47:30.248923 1219527 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:43:15 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:47:30.248958 1219527 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:47:30.249113 1219527 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:47:30.249299 1219527 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:47:30.249437 1219527 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:47:30.249553 1219527 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:47:30.326272 1219527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:47:30.340677 1219527 kubeconfig.go:125] found "ha-290859" server: "https://192.168.39.254:8443"
	I0414 14:47:30.340708 1219527 api_server.go:166] Checking apiserver status ...
	I0414 14:47:30.340739 1219527 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0414 14:47:30.352561 1219527 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0414 14:47:30.352592 1219527 status.go:463] ha-290859-m02 apiserver status = Stopped (err=<nil>)
	I0414 14:47:30.352605 1219527 status.go:176] ha-290859-m02 status: &{Name:ha-290859-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 14:47:30.352636 1219527 status.go:174] checking status of ha-290859-m03 ...
	I0414 14:47:30.353002 1219527 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:30.353053 1219527 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:30.368927 1219527 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42973
	I0414 14:47:30.369376 1219527 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:30.369848 1219527 main.go:141] libmachine: Using API Version  1
	I0414 14:47:30.369871 1219527 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:30.370239 1219527 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:30.370455 1219527 main.go:141] libmachine: (ha-290859-m03) Calling .GetState
	I0414 14:47:30.372007 1219527 status.go:371] ha-290859-m03 host status = "Running" (err=<nil>)
	I0414 14:47:30.372027 1219527 host.go:66] Checking if "ha-290859-m03" exists ...
	I0414 14:47:30.372333 1219527 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:30.372373 1219527 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:30.387447 1219527 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46721
	I0414 14:47:30.388000 1219527 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:30.388417 1219527 main.go:141] libmachine: Using API Version  1
	I0414 14:47:30.388437 1219527 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:30.388768 1219527 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:30.388964 1219527 main.go:141] libmachine: (ha-290859-m03) Calling .GetIP
	I0414 14:47:30.391513 1219527 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:47:30.391911 1219527 main.go:141] libmachine: (ha-290859-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:4a:72", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:42:14 +0000 UTC Type:0 Mac:52:54:00:b7:4a:72 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-290859-m03 Clientid:01:52:54:00:b7:4a:72}
	I0414 14:47:30.391939 1219527 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined IP address 192.168.39.112 and MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:47:30.392103 1219527 host.go:66] Checking if "ha-290859-m03" exists ...
	I0414 14:47:30.392515 1219527 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:30.392563 1219527 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:30.407883 1219527 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36165
	I0414 14:47:30.408272 1219527 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:30.408742 1219527 main.go:141] libmachine: Using API Version  1
	I0414 14:47:30.408761 1219527 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:30.409088 1219527 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:30.409263 1219527 main.go:141] libmachine: (ha-290859-m03) Calling .DriverName
	I0414 14:47:30.409418 1219527 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 14:47:30.409443 1219527 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHHostname
	I0414 14:47:30.412088 1219527 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:47:30.412498 1219527 main.go:141] libmachine: (ha-290859-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:4a:72", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:42:14 +0000 UTC Type:0 Mac:52:54:00:b7:4a:72 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-290859-m03 Clientid:01:52:54:00:b7:4a:72}
	I0414 14:47:30.412526 1219527 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined IP address 192.168.39.112 and MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:47:30.412716 1219527 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHPort
	I0414 14:47:30.412876 1219527 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHKeyPath
	I0414 14:47:30.413042 1219527 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHUsername
	I0414 14:47:30.413189 1219527 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m03/id_rsa Username:docker}
	I0414 14:47:30.489941 1219527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:47:30.502438 1219527 status.go:176] ha-290859-m03 status: &{Name:ha-290859-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0414 14:47:30.508708 1203639 retry.go:31] will retry after 4.009009481s: exit status 2
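Note: the per-node component states in the status output above are derived from command exit codes over SSH: "sudo systemctl is-active --quiet service kubelet" (exit 0 means kubelet Running) and "sudo pgrep -xnf kube-apiserver.*minikube.*" (exit 1, as logged for m02, means no apiserver process and it is reported Stopped). The sketch below illustrates only that exit-code mapping, running the commands locally for simplicity; the real checks execute inside each VM, and the helper name stateFrom is invented for this example.

// Map a command's exit status to the Running/Stopped strings seen in the log.
package main

import (
	"fmt"
	"os/exec"
)

func stateFrom(cmd *exec.Cmd) string {
	if err := cmd.Run(); err != nil {
		return "Stopped" // non-zero exit, e.g. pgrep finding no match
	}
	return "Running"
}

func main() {
	kubelet := stateFrom(exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet"))
	apiserver := stateFrom(exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*"))
	fmt.Printf("kubelet: %s\napiserver: %s\n", kubelet, apiserver)
}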
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-290859 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-290859 status -v=7 --alsologtostderr: exit status 2 (590.066916ms)

-- stdout --
	ha-290859
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-290859-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-290859-m03
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0414 14:47:34.562761 1219609 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:47:34.563039 1219609 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:47:34.563051 1219609 out.go:358] Setting ErrFile to fd 2...
	I0414 14:47:34.563056 1219609 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:47:34.563292 1219609 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
	I0414 14:47:34.563509 1219609 out.go:352] Setting JSON to false
	I0414 14:47:34.563553 1219609 mustload.go:65] Loading cluster: ha-290859
	I0414 14:47:34.563593 1219609 notify.go:220] Checking for updates...
	I0414 14:47:34.563977 1219609 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:47:34.564003 1219609 status.go:174] checking status of ha-290859 ...
	I0414 14:47:34.564432 1219609 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:34.564494 1219609 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:34.581296 1219609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40815
	I0414 14:47:34.581920 1219609 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:34.582565 1219609 main.go:141] libmachine: Using API Version  1
	I0414 14:47:34.582594 1219609 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:34.582962 1219609 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:34.583176 1219609 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:47:34.584754 1219609 status.go:371] ha-290859 host status = "Running" (err=<nil>)
	I0414 14:47:34.584775 1219609 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:47:34.585208 1219609 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:34.585289 1219609 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:34.600531 1219609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34571
	I0414 14:47:34.600990 1219609 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:34.601369 1219609 main.go:141] libmachine: Using API Version  1
	I0414 14:47:34.601387 1219609 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:34.601728 1219609 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:34.601912 1219609 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:47:34.604672 1219609 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:47:34.605055 1219609 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:47:34.605093 1219609 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:47:34.605211 1219609 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:47:34.605508 1219609 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:34.605548 1219609 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:34.621911 1219609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43053
	I0414 14:47:34.622337 1219609 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:34.622779 1219609 main.go:141] libmachine: Using API Version  1
	I0414 14:47:34.622805 1219609 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:34.623152 1219609 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:34.623346 1219609 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:47:34.623531 1219609 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 14:47:34.623553 1219609 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:47:34.626364 1219609 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:47:34.626798 1219609 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:47:34.626825 1219609 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:47:34.627008 1219609 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:47:34.627182 1219609 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:47:34.627333 1219609 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:47:34.627467 1219609 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:47:34.712615 1219609 ssh_runner.go:195] Run: systemctl --version
	I0414 14:47:34.719947 1219609 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:47:34.739950 1219609 kubeconfig.go:125] found "ha-290859" server: "https://192.168.39.254:8443"
	I0414 14:47:34.740010 1219609 api_server.go:166] Checking apiserver status ...
	I0414 14:47:34.740058 1219609 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:47:34.756141 1219609 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1191/cgroup
	W0414 14:47:34.765598 1219609 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1191/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0414 14:47:34.765670 1219609 ssh_runner.go:195] Run: ls
	I0414 14:47:34.769574 1219609 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0414 14:47:34.774084 1219609 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0414 14:47:34.774120 1219609 status.go:463] ha-290859 apiserver status = Running (err=<nil>)
	I0414 14:47:34.774136 1219609 status.go:176] ha-290859 status: &{Name:ha-290859 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 14:47:34.774157 1219609 status.go:174] checking status of ha-290859-m02 ...
	I0414 14:47:34.774447 1219609 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:34.774517 1219609 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:34.791990 1219609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38957
	I0414 14:47:34.792601 1219609 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:34.793127 1219609 main.go:141] libmachine: Using API Version  1
	I0414 14:47:34.793146 1219609 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:34.793477 1219609 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:34.793669 1219609 main.go:141] libmachine: (ha-290859-m02) Calling .GetState
	I0414 14:47:34.795210 1219609 status.go:371] ha-290859-m02 host status = "Running" (err=<nil>)
	I0414 14:47:34.795227 1219609 host.go:66] Checking if "ha-290859-m02" exists ...
	I0414 14:47:34.795547 1219609 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:34.795587 1219609 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:34.811502 1219609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44367
	I0414 14:47:34.812015 1219609 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:34.812551 1219609 main.go:141] libmachine: Using API Version  1
	I0414 14:47:34.812573 1219609 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:34.812867 1219609 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:34.813092 1219609 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:47:34.816247 1219609 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:47:34.816857 1219609 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:43:15 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:47:34.816883 1219609 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:47:34.817099 1219609 host.go:66] Checking if "ha-290859-m02" exists ...
	I0414 14:47:34.817496 1219609 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:34.817561 1219609 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:34.834356 1219609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33857
	I0414 14:47:34.834826 1219609 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:34.835377 1219609 main.go:141] libmachine: Using API Version  1
	I0414 14:47:34.835401 1219609 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:34.835778 1219609 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:34.835997 1219609 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:47:34.836191 1219609 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 14:47:34.836215 1219609 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:47:34.839560 1219609 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:47:34.840073 1219609 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:43:15 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:47:34.840103 1219609 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:47:34.840230 1219609 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:47:34.840417 1219609 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:47:34.840581 1219609 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:47:34.840720 1219609 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:47:34.918010 1219609 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:47:34.931182 1219609 kubeconfig.go:125] found "ha-290859" server: "https://192.168.39.254:8443"
	I0414 14:47:34.931218 1219609 api_server.go:166] Checking apiserver status ...
	I0414 14:47:34.931285 1219609 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0414 14:47:34.942612 1219609 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0414 14:47:34.942641 1219609 status.go:463] ha-290859-m02 apiserver status = Stopped (err=<nil>)
	I0414 14:47:34.942653 1219609 status.go:176] ha-290859-m02 status: &{Name:ha-290859-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 14:47:34.942668 1219609 status.go:174] checking status of ha-290859-m03 ...
	I0414 14:47:34.942970 1219609 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:34.943011 1219609 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:34.959364 1219609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38597
	I0414 14:47:34.959907 1219609 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:34.960370 1219609 main.go:141] libmachine: Using API Version  1
	I0414 14:47:34.960401 1219609 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:34.960749 1219609 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:34.960965 1219609 main.go:141] libmachine: (ha-290859-m03) Calling .GetState
	I0414 14:47:34.962397 1219609 status.go:371] ha-290859-m03 host status = "Running" (err=<nil>)
	I0414 14:47:34.962416 1219609 host.go:66] Checking if "ha-290859-m03" exists ...
	I0414 14:47:34.962695 1219609 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:34.962730 1219609 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:34.979324 1219609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34457
	I0414 14:47:34.979846 1219609 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:34.980383 1219609 main.go:141] libmachine: Using API Version  1
	I0414 14:47:34.980404 1219609 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:34.980764 1219609 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:34.980941 1219609 main.go:141] libmachine: (ha-290859-m03) Calling .GetIP
	I0414 14:47:34.983868 1219609 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:47:34.984336 1219609 main.go:141] libmachine: (ha-290859-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:4a:72", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:42:14 +0000 UTC Type:0 Mac:52:54:00:b7:4a:72 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-290859-m03 Clientid:01:52:54:00:b7:4a:72}
	I0414 14:47:34.984372 1219609 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined IP address 192.168.39.112 and MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:47:34.984445 1219609 host.go:66] Checking if "ha-290859-m03" exists ...
	I0414 14:47:34.984743 1219609 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:34.984792 1219609 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:35.000036 1219609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33107
	I0414 14:47:35.000585 1219609 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:35.001157 1219609 main.go:141] libmachine: Using API Version  1
	I0414 14:47:35.001180 1219609 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:35.001560 1219609 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:35.001737 1219609 main.go:141] libmachine: (ha-290859-m03) Calling .DriverName
	I0414 14:47:35.001924 1219609 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 14:47:35.001958 1219609 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHHostname
	I0414 14:47:35.005047 1219609 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:47:35.005663 1219609 main.go:141] libmachine: (ha-290859-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:4a:72", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:42:14 +0000 UTC Type:0 Mac:52:54:00:b7:4a:72 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-290859-m03 Clientid:01:52:54:00:b7:4a:72}
	I0414 14:47:35.005695 1219609 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined IP address 192.168.39.112 and MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:47:35.005849 1219609 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHPort
	I0414 14:47:35.006057 1219609 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHKeyPath
	I0414 14:47:35.006205 1219609 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHUsername
	I0414 14:47:35.006386 1219609 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m03/id_rsa Username:docker}
	I0414 14:47:35.087564 1219609 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:47:35.103858 1219609 status.go:176] ha-290859-m03 status: &{Name:ha-290859-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0414 14:47:35.110600 1203639 retry.go:31] will retry after 5.216737727s: exit status 2
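Note: for the primary control plane the check goes one step past pgrep: the log probes the kubeconfig server endpoint (https://192.168.39.254:8443/healthz) and treats an HTTP 200 "ok" as APIServer Running. A minimal sketch of such a probe follows, assuming TLS verification is skipped purely for brevity; a real client would trust the cluster CA instead.

// Probe the apiserver healthz endpoint, as in "Checking apiserver healthz at ...".
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
		},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver status = Stopped:", err)
		return
	}
	defer resp.Body.Close()
	if resp.StatusCode == http.StatusOK {
		fmt.Println("apiserver status = Running")
	}
}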
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-290859 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-290859 status -v=7 --alsologtostderr: exit status 2 (598.307835ms)

-- stdout --
	ha-290859
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-290859-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-290859-m03
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0414 14:47:40.373685 1219691 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:47:40.374004 1219691 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:47:40.374027 1219691 out.go:358] Setting ErrFile to fd 2...
	I0414 14:47:40.374035 1219691 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:47:40.374328 1219691 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
	I0414 14:47:40.374517 1219691 out.go:352] Setting JSON to false
	I0414 14:47:40.374569 1219691 mustload.go:65] Loading cluster: ha-290859
	I0414 14:47:40.374708 1219691 notify.go:220] Checking for updates...
	I0414 14:47:40.375181 1219691 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:47:40.375220 1219691 status.go:174] checking status of ha-290859 ...
	I0414 14:47:40.375844 1219691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:40.375901 1219691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:40.392898 1219691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33233
	I0414 14:47:40.393411 1219691 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:40.394133 1219691 main.go:141] libmachine: Using API Version  1
	I0414 14:47:40.394176 1219691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:40.394615 1219691 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:40.394824 1219691 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:47:40.396775 1219691 status.go:371] ha-290859 host status = "Running" (err=<nil>)
	I0414 14:47:40.396794 1219691 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:47:40.397132 1219691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:40.397188 1219691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:40.414121 1219691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40397
	I0414 14:47:40.414592 1219691 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:40.415057 1219691 main.go:141] libmachine: Using API Version  1
	I0414 14:47:40.415087 1219691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:40.415521 1219691 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:40.415796 1219691 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:47:40.419248 1219691 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:47:40.419779 1219691 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:47:40.419805 1219691 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:47:40.420028 1219691 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:47:40.420465 1219691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:40.420516 1219691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:40.436385 1219691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45029
	I0414 14:47:40.436986 1219691 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:40.437447 1219691 main.go:141] libmachine: Using API Version  1
	I0414 14:47:40.437472 1219691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:40.437821 1219691 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:40.438067 1219691 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:47:40.438285 1219691 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 14:47:40.438313 1219691 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:47:40.441849 1219691 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:47:40.442349 1219691 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:47:40.442372 1219691 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:47:40.442556 1219691 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:47:40.442744 1219691 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:47:40.442915 1219691 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:47:40.443134 1219691 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:47:40.530827 1219691 ssh_runner.go:195] Run: systemctl --version
	I0414 14:47:40.537126 1219691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:47:40.554190 1219691 kubeconfig.go:125] found "ha-290859" server: "https://192.168.39.254:8443"
	I0414 14:47:40.554235 1219691 api_server.go:166] Checking apiserver status ...
	I0414 14:47:40.554272 1219691 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:47:40.567685 1219691 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1191/cgroup
	W0414 14:47:40.578111 1219691 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1191/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0414 14:47:40.578196 1219691 ssh_runner.go:195] Run: ls
	I0414 14:47:40.582605 1219691 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0414 14:47:40.587284 1219691 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0414 14:47:40.587318 1219691 status.go:463] ha-290859 apiserver status = Running (err=<nil>)
	I0414 14:47:40.587332 1219691 status.go:176] ha-290859 status: &{Name:ha-290859 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 14:47:40.587348 1219691 status.go:174] checking status of ha-290859-m02 ...
	I0414 14:47:40.587648 1219691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:40.587687 1219691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:40.604044 1219691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43125
	I0414 14:47:40.604531 1219691 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:40.605041 1219691 main.go:141] libmachine: Using API Version  1
	I0414 14:47:40.605063 1219691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:40.605428 1219691 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:40.605684 1219691 main.go:141] libmachine: (ha-290859-m02) Calling .GetState
	I0414 14:47:40.607292 1219691 status.go:371] ha-290859-m02 host status = "Running" (err=<nil>)
	I0414 14:47:40.607312 1219691 host.go:66] Checking if "ha-290859-m02" exists ...
	I0414 14:47:40.607632 1219691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:40.607712 1219691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:40.623341 1219691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43601
	I0414 14:47:40.623812 1219691 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:40.624334 1219691 main.go:141] libmachine: Using API Version  1
	I0414 14:47:40.624355 1219691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:40.624760 1219691 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:40.624985 1219691 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:47:40.628177 1219691 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:47:40.628618 1219691 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:43:15 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:47:40.628637 1219691 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:47:40.628741 1219691 host.go:66] Checking if "ha-290859-m02" exists ...
	I0414 14:47:40.629056 1219691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:40.629097 1219691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:40.645624 1219691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46467
	I0414 14:47:40.646092 1219691 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:40.646504 1219691 main.go:141] libmachine: Using API Version  1
	I0414 14:47:40.646528 1219691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:40.646904 1219691 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:40.647154 1219691 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:47:40.647438 1219691 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 14:47:40.647465 1219691 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:47:40.650278 1219691 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:47:40.650658 1219691 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:43:15 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:47:40.650691 1219691 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:47:40.650864 1219691 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:47:40.651126 1219691 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:47:40.651336 1219691 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:47:40.651465 1219691 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:47:40.738672 1219691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:47:40.753704 1219691 kubeconfig.go:125] found "ha-290859" server: "https://192.168.39.254:8443"
	I0414 14:47:40.753733 1219691 api_server.go:166] Checking apiserver status ...
	I0414 14:47:40.753767 1219691 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0414 14:47:40.766973 1219691 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0414 14:47:40.767000 1219691 status.go:463] ha-290859-m02 apiserver status = Stopped (err=<nil>)
	I0414 14:47:40.767011 1219691 status.go:176] ha-290859-m02 status: &{Name:ha-290859-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 14:47:40.767026 1219691 status.go:174] checking status of ha-290859-m03 ...
	I0414 14:47:40.767361 1219691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:40.767411 1219691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:40.783937 1219691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40285
	I0414 14:47:40.784550 1219691 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:40.785270 1219691 main.go:141] libmachine: Using API Version  1
	I0414 14:47:40.785300 1219691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:40.785673 1219691 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:40.785866 1219691 main.go:141] libmachine: (ha-290859-m03) Calling .GetState
	I0414 14:47:40.787384 1219691 status.go:371] ha-290859-m03 host status = "Running" (err=<nil>)
	I0414 14:47:40.787405 1219691 host.go:66] Checking if "ha-290859-m03" exists ...
	I0414 14:47:40.787716 1219691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:40.787752 1219691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:40.802829 1219691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44199
	I0414 14:47:40.803384 1219691 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:40.803852 1219691 main.go:141] libmachine: Using API Version  1
	I0414 14:47:40.803874 1219691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:40.804213 1219691 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:40.804433 1219691 main.go:141] libmachine: (ha-290859-m03) Calling .GetIP
	I0414 14:47:40.807037 1219691 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:47:40.807439 1219691 main.go:141] libmachine: (ha-290859-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:4a:72", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:42:14 +0000 UTC Type:0 Mac:52:54:00:b7:4a:72 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-290859-m03 Clientid:01:52:54:00:b7:4a:72}
	I0414 14:47:40.807467 1219691 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined IP address 192.168.39.112 and MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:47:40.807613 1219691 host.go:66] Checking if "ha-290859-m03" exists ...
	I0414 14:47:40.808072 1219691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:40.808129 1219691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:40.823300 1219691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35507
	I0414 14:47:40.823810 1219691 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:40.824266 1219691 main.go:141] libmachine: Using API Version  1
	I0414 14:47:40.824281 1219691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:40.824695 1219691 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:40.824886 1219691 main.go:141] libmachine: (ha-290859-m03) Calling .DriverName
	I0414 14:47:40.825098 1219691 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 14:47:40.825125 1219691 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHHostname
	I0414 14:47:40.828246 1219691 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:47:40.828722 1219691 main.go:141] libmachine: (ha-290859-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:4a:72", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:42:14 +0000 UTC Type:0 Mac:52:54:00:b7:4a:72 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-290859-m03 Clientid:01:52:54:00:b7:4a:72}
	I0414 14:47:40.828760 1219691 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined IP address 192.168.39.112 and MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:47:40.828969 1219691 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHPort
	I0414 14:47:40.829225 1219691 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHKeyPath
	I0414 14:47:40.829382 1219691 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHUsername
	I0414 14:47:40.829534 1219691 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m03/id_rsa Username:docker}
	I0414 14:47:40.906605 1219691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:47:40.920795 1219691 status.go:176] ha-290859-m03 status: &{Name:ha-290859-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0414 14:47:40.927309 1203639 retry.go:31] will retry after 7.831464005s: exit status 2
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-290859 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-290859 status -v=7 --alsologtostderr: exit status 2 (581.510498ms)

-- stdout --
	ha-290859
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-290859-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-290859-m03
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0414 14:47:48.803594 1219790 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:47:48.803845 1219790 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:47:48.803854 1219790 out.go:358] Setting ErrFile to fd 2...
	I0414 14:47:48.803858 1219790 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:47:48.804058 1219790 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
	I0414 14:47:48.804249 1219790 out.go:352] Setting JSON to false
	I0414 14:47:48.804282 1219790 mustload.go:65] Loading cluster: ha-290859
	I0414 14:47:48.804426 1219790 notify.go:220] Checking for updates...
	I0414 14:47:48.804621 1219790 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:47:48.804645 1219790 status.go:174] checking status of ha-290859 ...
	I0414 14:47:48.805102 1219790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:48.805156 1219790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:48.820689 1219790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37811
	I0414 14:47:48.821261 1219790 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:48.822000 1219790 main.go:141] libmachine: Using API Version  1
	I0414 14:47:48.822029 1219790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:48.822417 1219790 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:48.822645 1219790 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:47:48.824360 1219790 status.go:371] ha-290859 host status = "Running" (err=<nil>)
	I0414 14:47:48.824385 1219790 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:47:48.824719 1219790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:48.824787 1219790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:48.840775 1219790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37569
	I0414 14:47:48.841360 1219790 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:48.841850 1219790 main.go:141] libmachine: Using API Version  1
	I0414 14:47:48.841875 1219790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:48.842276 1219790 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:48.842486 1219790 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:47:48.845148 1219790 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:47:48.845528 1219790 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:47:48.845557 1219790 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:47:48.845636 1219790 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:47:48.845933 1219790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:48.845980 1219790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:48.861650 1219790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34769
	I0414 14:47:48.862174 1219790 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:48.862605 1219790 main.go:141] libmachine: Using API Version  1
	I0414 14:47:48.862630 1219790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:48.862981 1219790 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:48.863144 1219790 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:47:48.863365 1219790 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 14:47:48.863406 1219790 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:47:48.866457 1219790 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:47:48.866918 1219790 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:47:48.866949 1219790 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:47:48.867096 1219790 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:47:48.867293 1219790 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:47:48.867443 1219790 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:47:48.867587 1219790 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:47:48.954716 1219790 ssh_runner.go:195] Run: systemctl --version
	I0414 14:47:48.961052 1219790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:47:48.976619 1219790 kubeconfig.go:125] found "ha-290859" server: "https://192.168.39.254:8443"
	I0414 14:47:48.976665 1219790 api_server.go:166] Checking apiserver status ...
	I0414 14:47:48.976702 1219790 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:47:48.990669 1219790 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1191/cgroup
	W0414 14:47:49.002321 1219790 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1191/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0414 14:47:49.002388 1219790 ssh_runner.go:195] Run: ls
	I0414 14:47:49.006866 1219790 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0414 14:47:49.013208 1219790 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0414 14:47:49.013236 1219790 status.go:463] ha-290859 apiserver status = Running (err=<nil>)
	I0414 14:47:49.013246 1219790 status.go:176] ha-290859 status: &{Name:ha-290859 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 14:47:49.013280 1219790 status.go:174] checking status of ha-290859-m02 ...
	I0414 14:47:49.013566 1219790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:49.013601 1219790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:49.029542 1219790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45079
	I0414 14:47:49.030012 1219790 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:49.030469 1219790 main.go:141] libmachine: Using API Version  1
	I0414 14:47:49.030490 1219790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:49.030826 1219790 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:49.031054 1219790 main.go:141] libmachine: (ha-290859-m02) Calling .GetState
	I0414 14:47:49.032648 1219790 status.go:371] ha-290859-m02 host status = "Running" (err=<nil>)
	I0414 14:47:49.032667 1219790 host.go:66] Checking if "ha-290859-m02" exists ...
	I0414 14:47:49.032979 1219790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:49.033033 1219790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:49.048922 1219790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33261
	I0414 14:47:49.049526 1219790 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:49.050018 1219790 main.go:141] libmachine: Using API Version  1
	I0414 14:47:49.050039 1219790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:49.050357 1219790 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:49.050496 1219790 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:47:49.053545 1219790 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:47:49.053969 1219790 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:43:15 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:47:49.054004 1219790 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:47:49.054171 1219790 host.go:66] Checking if "ha-290859-m02" exists ...
	I0414 14:47:49.054488 1219790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:49.054535 1219790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:49.070128 1219790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44441
	I0414 14:47:49.070668 1219790 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:49.071292 1219790 main.go:141] libmachine: Using API Version  1
	I0414 14:47:49.071322 1219790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:49.071672 1219790 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:49.071867 1219790 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:47:49.072078 1219790 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 14:47:49.072104 1219790 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:47:49.075080 1219790 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:47:49.075721 1219790 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:43:15 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:47:49.075775 1219790 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:47:49.075940 1219790 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:47:49.076120 1219790 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:47:49.076264 1219790 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:47:49.076406 1219790 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:47:49.154063 1219790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:47:49.168246 1219790 kubeconfig.go:125] found "ha-290859" server: "https://192.168.39.254:8443"
	I0414 14:47:49.168278 1219790 api_server.go:166] Checking apiserver status ...
	I0414 14:47:49.168323 1219790 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0414 14:47:49.180186 1219790 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0414 14:47:49.180209 1219790 status.go:463] ha-290859-m02 apiserver status = Stopped (err=<nil>)
	I0414 14:47:49.180220 1219790 status.go:176] ha-290859-m02 status: &{Name:ha-290859-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 14:47:49.180235 1219790 status.go:174] checking status of ha-290859-m03 ...
	I0414 14:47:49.180565 1219790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:49.180615 1219790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:49.196680 1219790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37793
	I0414 14:47:49.197310 1219790 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:49.197820 1219790 main.go:141] libmachine: Using API Version  1
	I0414 14:47:49.197845 1219790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:49.198145 1219790 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:49.198334 1219790 main.go:141] libmachine: (ha-290859-m03) Calling .GetState
	I0414 14:47:49.200262 1219790 status.go:371] ha-290859-m03 host status = "Running" (err=<nil>)
	I0414 14:47:49.200282 1219790 host.go:66] Checking if "ha-290859-m03" exists ...
	I0414 14:47:49.200621 1219790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:49.200670 1219790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:49.216147 1219790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45855
	I0414 14:47:49.216607 1219790 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:49.217099 1219790 main.go:141] libmachine: Using API Version  1
	I0414 14:47:49.217126 1219790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:49.217501 1219790 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:49.217699 1219790 main.go:141] libmachine: (ha-290859-m03) Calling .GetIP
	I0414 14:47:49.220483 1219790 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:47:49.220902 1219790 main.go:141] libmachine: (ha-290859-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:4a:72", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:42:14 +0000 UTC Type:0 Mac:52:54:00:b7:4a:72 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-290859-m03 Clientid:01:52:54:00:b7:4a:72}
	I0414 14:47:49.220937 1219790 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined IP address 192.168.39.112 and MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:47:49.221141 1219790 host.go:66] Checking if "ha-290859-m03" exists ...
	I0414 14:47:49.221472 1219790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:47:49.221515 1219790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:47:49.237017 1219790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46807
	I0414 14:47:49.237547 1219790 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:47:49.238035 1219790 main.go:141] libmachine: Using API Version  1
	I0414 14:47:49.238059 1219790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:47:49.238371 1219790 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:47:49.238556 1219790 main.go:141] libmachine: (ha-290859-m03) Calling .DriverName
	I0414 14:47:49.238723 1219790 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 14:47:49.238745 1219790 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHHostname
	I0414 14:47:49.241522 1219790 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:47:49.241907 1219790 main.go:141] libmachine: (ha-290859-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:4a:72", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:42:14 +0000 UTC Type:0 Mac:52:54:00:b7:4a:72 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-290859-m03 Clientid:01:52:54:00:b7:4a:72}
	I0414 14:47:49.241927 1219790 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined IP address 192.168.39.112 and MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:47:49.242053 1219790 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHPort
	I0414 14:47:49.242233 1219790 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHKeyPath
	I0414 14:47:49.242386 1219790 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHUsername
	I0414 14:47:49.242523 1219790 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m03/id_rsa Username:docker}
	I0414 14:47:49.322214 1219790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:47:49.335693 1219790 status.go:176] ha-290859-m03 status: &{Name:ha-290859-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0414 14:47:49.341838 1203639 retry.go:31] will retry after 11.458126452s: exit status 2
E0414 14:47:59.574916 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/functional-905978/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-290859 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-290859 status -v=7 --alsologtostderr: exit status 2 (577.222179ms)

-- stdout --
	ha-290859
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-290859-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-290859-m03
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0414 14:48:00.844361 1219891 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:48:00.844637 1219891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:48:00.844647 1219891 out.go:358] Setting ErrFile to fd 2...
	I0414 14:48:00.844651 1219891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:48:00.844843 1219891 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
	I0414 14:48:00.845013 1219891 out.go:352] Setting JSON to false
	I0414 14:48:00.845045 1219891 mustload.go:65] Loading cluster: ha-290859
	I0414 14:48:00.845182 1219891 notify.go:220] Checking for updates...
	I0414 14:48:00.845447 1219891 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:48:00.845469 1219891 status.go:174] checking status of ha-290859 ...
	I0414 14:48:00.845881 1219891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:48:00.845924 1219891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:48:00.862838 1219891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43781
	I0414 14:48:00.863416 1219891 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:48:00.863941 1219891 main.go:141] libmachine: Using API Version  1
	I0414 14:48:00.863960 1219891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:48:00.864297 1219891 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:48:00.864482 1219891 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:48:00.866298 1219891 status.go:371] ha-290859 host status = "Running" (err=<nil>)
	I0414 14:48:00.866315 1219891 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:48:00.866611 1219891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:48:00.866649 1219891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:48:00.881874 1219891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35533
	I0414 14:48:00.882361 1219891 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:48:00.882919 1219891 main.go:141] libmachine: Using API Version  1
	I0414 14:48:00.882950 1219891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:48:00.883266 1219891 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:48:00.883461 1219891 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:48:00.886443 1219891 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:48:00.886848 1219891 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:48:00.886874 1219891 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:48:00.886955 1219891 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:48:00.887290 1219891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:48:00.887331 1219891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:48:00.902622 1219891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33041
	I0414 14:48:00.903062 1219891 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:48:00.903578 1219891 main.go:141] libmachine: Using API Version  1
	I0414 14:48:00.903607 1219891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:48:00.903996 1219891 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:48:00.904186 1219891 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:48:00.904357 1219891 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 14:48:00.904390 1219891 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:48:00.907411 1219891 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:48:00.907819 1219891 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:48:00.907852 1219891 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:48:00.907977 1219891 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:48:00.908177 1219891 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:48:00.908335 1219891 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:48:00.908514 1219891 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:48:00.991496 1219891 ssh_runner.go:195] Run: systemctl --version
	I0414 14:48:00.997758 1219891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:48:01.012490 1219891 kubeconfig.go:125] found "ha-290859" server: "https://192.168.39.254:8443"
	I0414 14:48:01.012536 1219891 api_server.go:166] Checking apiserver status ...
	I0414 14:48:01.012578 1219891 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:48:01.026745 1219891 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1191/cgroup
	W0414 14:48:01.036570 1219891 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1191/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0414 14:48:01.036641 1219891 ssh_runner.go:195] Run: ls
	I0414 14:48:01.041774 1219891 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0414 14:48:01.048244 1219891 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0414 14:48:01.048273 1219891 status.go:463] ha-290859 apiserver status = Running (err=<nil>)
	I0414 14:48:01.048283 1219891 status.go:176] ha-290859 status: &{Name:ha-290859 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 14:48:01.048299 1219891 status.go:174] checking status of ha-290859-m02 ...
	I0414 14:48:01.048734 1219891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:48:01.048791 1219891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:48:01.064829 1219891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44907
	I0414 14:48:01.065415 1219891 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:48:01.065959 1219891 main.go:141] libmachine: Using API Version  1
	I0414 14:48:01.065991 1219891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:48:01.066439 1219891 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:48:01.066657 1219891 main.go:141] libmachine: (ha-290859-m02) Calling .GetState
	I0414 14:48:01.068212 1219891 status.go:371] ha-290859-m02 host status = "Running" (err=<nil>)
	I0414 14:48:01.068230 1219891 host.go:66] Checking if "ha-290859-m02" exists ...
	I0414 14:48:01.068527 1219891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:48:01.068589 1219891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:48:01.084381 1219891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42583
	I0414 14:48:01.084883 1219891 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:48:01.085411 1219891 main.go:141] libmachine: Using API Version  1
	I0414 14:48:01.085442 1219891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:48:01.085797 1219891 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:48:01.085964 1219891 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:48:01.089105 1219891 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:48:01.089479 1219891 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:43:15 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:48:01.089513 1219891 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:48:01.089592 1219891 host.go:66] Checking if "ha-290859-m02" exists ...
	I0414 14:48:01.089883 1219891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:48:01.089928 1219891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:48:01.105715 1219891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34901
	I0414 14:48:01.106254 1219891 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:48:01.106684 1219891 main.go:141] libmachine: Using API Version  1
	I0414 14:48:01.106705 1219891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:48:01.107058 1219891 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:48:01.107219 1219891 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:48:01.107426 1219891 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 14:48:01.107449 1219891 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:48:01.110147 1219891 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:48:01.110512 1219891 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:43:15 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:48:01.110537 1219891 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:48:01.110693 1219891 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:48:01.110878 1219891 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:48:01.111045 1219891 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:48:01.111240 1219891 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:48:01.190675 1219891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:48:01.205292 1219891 kubeconfig.go:125] found "ha-290859" server: "https://192.168.39.254:8443"
	I0414 14:48:01.205319 1219891 api_server.go:166] Checking apiserver status ...
	I0414 14:48:01.205351 1219891 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0414 14:48:01.217160 1219891 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0414 14:48:01.217191 1219891 status.go:463] ha-290859-m02 apiserver status = Stopped (err=<nil>)
	I0414 14:48:01.217202 1219891 status.go:176] ha-290859-m02 status: &{Name:ha-290859-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 14:48:01.217218 1219891 status.go:174] checking status of ha-290859-m03 ...
	I0414 14:48:01.217585 1219891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:48:01.217634 1219891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:48:01.233567 1219891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43539
	I0414 14:48:01.234129 1219891 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:48:01.234696 1219891 main.go:141] libmachine: Using API Version  1
	I0414 14:48:01.234726 1219891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:48:01.235073 1219891 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:48:01.235293 1219891 main.go:141] libmachine: (ha-290859-m03) Calling .GetState
	I0414 14:48:01.236650 1219891 status.go:371] ha-290859-m03 host status = "Running" (err=<nil>)
	I0414 14:48:01.236670 1219891 host.go:66] Checking if "ha-290859-m03" exists ...
	I0414 14:48:01.236980 1219891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:48:01.237018 1219891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:48:01.252388 1219891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41751
	I0414 14:48:01.252879 1219891 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:48:01.253547 1219891 main.go:141] libmachine: Using API Version  1
	I0414 14:48:01.253576 1219891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:48:01.253963 1219891 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:48:01.254219 1219891 main.go:141] libmachine: (ha-290859-m03) Calling .GetIP
	I0414 14:48:01.257450 1219891 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:48:01.257944 1219891 main.go:141] libmachine: (ha-290859-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:4a:72", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:42:14 +0000 UTC Type:0 Mac:52:54:00:b7:4a:72 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-290859-m03 Clientid:01:52:54:00:b7:4a:72}
	I0414 14:48:01.257976 1219891 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined IP address 192.168.39.112 and MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:48:01.258164 1219891 host.go:66] Checking if "ha-290859-m03" exists ...
	I0414 14:48:01.258548 1219891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:48:01.258599 1219891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:48:01.274204 1219891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40073
	I0414 14:48:01.274665 1219891 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:48:01.275143 1219891 main.go:141] libmachine: Using API Version  1
	I0414 14:48:01.275174 1219891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:48:01.275580 1219891 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:48:01.275765 1219891 main.go:141] libmachine: (ha-290859-m03) Calling .DriverName
	I0414 14:48:01.275964 1219891 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 14:48:01.275994 1219891 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHHostname
	I0414 14:48:01.278881 1219891 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:48:01.279207 1219891 main.go:141] libmachine: (ha-290859-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:4a:72", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:42:14 +0000 UTC Type:0 Mac:52:54:00:b7:4a:72 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-290859-m03 Clientid:01:52:54:00:b7:4a:72}
	I0414 14:48:01.279243 1219891 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined IP address 192.168.39.112 and MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:48:01.279408 1219891 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHPort
	I0414 14:48:01.279640 1219891 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHKeyPath
	I0414 14:48:01.279804 1219891 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHUsername
	I0414 14:48:01.279951 1219891 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m03/id_rsa Username:docker}
	I0414 14:48:01.358039 1219891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:48:01.372036 1219891 status.go:176] ha-290859-m03 status: &{Name:ha-290859-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0414 14:48:01.378910 1203639 retry.go:31] will retry after 15.677003893s: exit status 2
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-290859 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-290859 status -v=7 --alsologtostderr: exit status 2 (566.148327ms)

-- stdout --
	ha-290859
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-290859-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-290859-m03
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0414 14:48:17.107525 1220008 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:48:17.107780 1220008 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:48:17.107790 1220008 out.go:358] Setting ErrFile to fd 2...
	I0414 14:48:17.107795 1220008 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:48:17.108040 1220008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
	I0414 14:48:17.108269 1220008 out.go:352] Setting JSON to false
	I0414 14:48:17.108311 1220008 mustload.go:65] Loading cluster: ha-290859
	I0414 14:48:17.108410 1220008 notify.go:220] Checking for updates...
	I0414 14:48:17.108780 1220008 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:48:17.108808 1220008 status.go:174] checking status of ha-290859 ...
	I0414 14:48:17.109261 1220008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:48:17.109316 1220008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:48:17.126992 1220008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39115
	I0414 14:48:17.127624 1220008 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:48:17.128212 1220008 main.go:141] libmachine: Using API Version  1
	I0414 14:48:17.128236 1220008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:48:17.128627 1220008 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:48:17.128826 1220008 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:48:17.130464 1220008 status.go:371] ha-290859 host status = "Running" (err=<nil>)
	I0414 14:48:17.130488 1220008 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:48:17.130910 1220008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:48:17.130961 1220008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:48:17.145750 1220008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41165
	I0414 14:48:17.146119 1220008 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:48:17.146545 1220008 main.go:141] libmachine: Using API Version  1
	I0414 14:48:17.146567 1220008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:48:17.146871 1220008 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:48:17.147094 1220008 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:48:17.149680 1220008 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:48:17.150150 1220008 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:48:17.150191 1220008 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:48:17.150312 1220008 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:48:17.150706 1220008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:48:17.150754 1220008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:48:17.165572 1220008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43603
	I0414 14:48:17.166081 1220008 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:48:17.166606 1220008 main.go:141] libmachine: Using API Version  1
	I0414 14:48:17.166632 1220008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:48:17.166979 1220008 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:48:17.167162 1220008 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:48:17.167382 1220008 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 14:48:17.167407 1220008 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:48:17.170193 1220008 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:48:17.170601 1220008 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:48:17.170632 1220008 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:48:17.170751 1220008 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:48:17.170939 1220008 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:48:17.171117 1220008 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:48:17.171295 1220008 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:48:17.254589 1220008 ssh_runner.go:195] Run: systemctl --version
	I0414 14:48:17.264249 1220008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:48:17.278503 1220008 kubeconfig.go:125] found "ha-290859" server: "https://192.168.39.254:8443"
	I0414 14:48:17.278550 1220008 api_server.go:166] Checking apiserver status ...
	I0414 14:48:17.278597 1220008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:48:17.291543 1220008 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1191/cgroup
	W0414 14:48:17.300011 1220008 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1191/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0414 14:48:17.300065 1220008 ssh_runner.go:195] Run: ls
	I0414 14:48:17.304201 1220008 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0414 14:48:17.308037 1220008 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0414 14:48:17.308062 1220008 status.go:463] ha-290859 apiserver status = Running (err=<nil>)
	I0414 14:48:17.308075 1220008 status.go:176] ha-290859 status: &{Name:ha-290859 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 14:48:17.308096 1220008 status.go:174] checking status of ha-290859-m02 ...
	I0414 14:48:17.308438 1220008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:48:17.308493 1220008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:48:17.323985 1220008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38077
	I0414 14:48:17.324479 1220008 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:48:17.324985 1220008 main.go:141] libmachine: Using API Version  1
	I0414 14:48:17.325009 1220008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:48:17.325336 1220008 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:48:17.325541 1220008 main.go:141] libmachine: (ha-290859-m02) Calling .GetState
	I0414 14:48:17.327181 1220008 status.go:371] ha-290859-m02 host status = "Running" (err=<nil>)
	I0414 14:48:17.327199 1220008 host.go:66] Checking if "ha-290859-m02" exists ...
	I0414 14:48:17.327663 1220008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:48:17.327717 1220008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:48:17.342683 1220008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36617
	I0414 14:48:17.343315 1220008 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:48:17.343807 1220008 main.go:141] libmachine: Using API Version  1
	I0414 14:48:17.343827 1220008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:48:17.344197 1220008 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:48:17.344386 1220008 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:48:17.347159 1220008 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:48:17.347558 1220008 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:43:15 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:48:17.347581 1220008 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:48:17.347745 1220008 host.go:66] Checking if "ha-290859-m02" exists ...
	I0414 14:48:17.348049 1220008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:48:17.348085 1220008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:48:17.362624 1220008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40317
	I0414 14:48:17.363041 1220008 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:48:17.363460 1220008 main.go:141] libmachine: Using API Version  1
	I0414 14:48:17.363484 1220008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:48:17.363761 1220008 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:48:17.363945 1220008 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:48:17.364151 1220008 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 14:48:17.364179 1220008 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:48:17.367084 1220008 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:48:17.367479 1220008 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:43:15 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:48:17.367503 1220008 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:48:17.367667 1220008 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:48:17.367818 1220008 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:48:17.367958 1220008 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:48:17.368171 1220008 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:48:17.446739 1220008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:48:17.460155 1220008 kubeconfig.go:125] found "ha-290859" server: "https://192.168.39.254:8443"
	I0414 14:48:17.460194 1220008 api_server.go:166] Checking apiserver status ...
	I0414 14:48:17.460230 1220008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0414 14:48:17.471399 1220008 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0414 14:48:17.471423 1220008 status.go:463] ha-290859-m02 apiserver status = Stopped (err=<nil>)
	I0414 14:48:17.471438 1220008 status.go:176] ha-290859-m02 status: &{Name:ha-290859-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 14:48:17.471458 1220008 status.go:174] checking status of ha-290859-m03 ...
	I0414 14:48:17.471777 1220008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:48:17.471829 1220008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:48:17.487300 1220008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44429
	I0414 14:48:17.487912 1220008 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:48:17.488464 1220008 main.go:141] libmachine: Using API Version  1
	I0414 14:48:17.488486 1220008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:48:17.488845 1220008 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:48:17.489116 1220008 main.go:141] libmachine: (ha-290859-m03) Calling .GetState
	I0414 14:48:17.490704 1220008 status.go:371] ha-290859-m03 host status = "Running" (err=<nil>)
	I0414 14:48:17.490725 1220008 host.go:66] Checking if "ha-290859-m03" exists ...
	I0414 14:48:17.491037 1220008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:48:17.491084 1220008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:48:17.506234 1220008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43217
	I0414 14:48:17.506688 1220008 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:48:17.507145 1220008 main.go:141] libmachine: Using API Version  1
	I0414 14:48:17.507168 1220008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:48:17.507515 1220008 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:48:17.507682 1220008 main.go:141] libmachine: (ha-290859-m03) Calling .GetIP
	I0414 14:48:17.510519 1220008 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:48:17.510899 1220008 main.go:141] libmachine: (ha-290859-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:4a:72", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:42:14 +0000 UTC Type:0 Mac:52:54:00:b7:4a:72 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-290859-m03 Clientid:01:52:54:00:b7:4a:72}
	I0414 14:48:17.510940 1220008 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined IP address 192.168.39.112 and MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:48:17.511117 1220008 host.go:66] Checking if "ha-290859-m03" exists ...
	I0414 14:48:17.511473 1220008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:48:17.511518 1220008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:48:17.526456 1220008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40653
	I0414 14:48:17.526912 1220008 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:48:17.527398 1220008 main.go:141] libmachine: Using API Version  1
	I0414 14:48:17.527427 1220008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:48:17.527771 1220008 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:48:17.527951 1220008 main.go:141] libmachine: (ha-290859-m03) Calling .DriverName
	I0414 14:48:17.528144 1220008 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 14:48:17.528174 1220008 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHHostname
	I0414 14:48:17.530540 1220008 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:48:17.530976 1220008 main.go:141] libmachine: (ha-290859-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:4a:72", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:42:14 +0000 UTC Type:0 Mac:52:54:00:b7:4a:72 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-290859-m03 Clientid:01:52:54:00:b7:4a:72}
	I0414 14:48:17.531005 1220008 main.go:141] libmachine: (ha-290859-m03) DBG | domain ha-290859-m03 has defined IP address 192.168.39.112 and MAC address 52:54:00:b7:4a:72 in network mk-ha-290859
	I0414 14:48:17.531096 1220008 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHPort
	I0414 14:48:17.531291 1220008 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHKeyPath
	I0414 14:48:17.531464 1220008 main.go:141] libmachine: (ha-290859-m03) Calling .GetSSHUsername
	I0414 14:48:17.531586 1220008 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m03/id_rsa Username:docker}
	I0414 14:48:17.610225 1220008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:48:17.623918 1220008 status.go:176] ha-290859-m03 status: &{Name:ha-290859-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:434: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-290859 status -v=7 --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-290859 -n ha-290859
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-290859 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-290859 logs -n 25: (1.115691049s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x -- nslookup |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx -- nslookup |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg -- nslookup |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x             |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx             |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg             |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg -- sh       |           |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1            |           |         |         |                     |                     |
	| node    | add -p ha-290859 -v=7                | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:42 UTC | 14 Apr 25 14:42 UTC |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-290859 node stop m02 -v=7         | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:42 UTC | 14 Apr 25 14:42 UTC |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-290859 node start m02 -v=7        | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:43 UTC |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 14:28:44
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 14:28:44.853283 1213155 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:28:44.853383 1213155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:28:44.853391 1213155 out.go:358] Setting ErrFile to fd 2...
	I0414 14:28:44.853395 1213155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:28:44.853589 1213155 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
	I0414 14:28:44.854173 1213155 out.go:352] Setting JSON to false
	I0414 14:28:44.855127 1213155 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":22268,"bootTime":1744618657,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 14:28:44.855241 1213155 start.go:139] virtualization: kvm guest
	I0414 14:28:44.857434 1213155 out.go:177] * [ha-290859] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 14:28:44.858763 1213155 out.go:177]   - MINIKUBE_LOCATION=20512
	I0414 14:28:44.858802 1213155 notify.go:220] Checking for updates...
	I0414 14:28:44.861113 1213155 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 14:28:44.862568 1213155 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:28:44.864291 1213155 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:28:44.865558 1213155 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 14:28:44.866690 1213155 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 14:28:44.867994 1213155 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 14:28:44.903880 1213155 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 14:28:44.904972 1213155 start.go:297] selected driver: kvm2
	I0414 14:28:44.904990 1213155 start.go:901] validating driver "kvm2" against <nil>
	I0414 14:28:44.905002 1213155 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 14:28:44.905693 1213155 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:28:44.905760 1213155 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20512-1196368/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 14:28:44.921165 1213155 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 14:28:44.921211 1213155 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 14:28:44.921449 1213155 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 14:28:44.921483 1213155 cni.go:84] Creating CNI manager for ""
	I0414 14:28:44.921521 1213155 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0414 14:28:44.921528 1213155 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0414 14:28:44.921581 1213155 start.go:340] cluster config:
	{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:28:44.921681 1213155 iso.go:125] acquiring lock: {Name:mkbf783c803effe6c4b8297ac6b84dcca9e29413 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:28:44.923479 1213155 out.go:177] * Starting "ha-290859" primary control-plane node in "ha-290859" cluster
	I0414 14:28:44.924489 1213155 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:28:44.924534 1213155 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
	I0414 14:28:44.924545 1213155 cache.go:56] Caching tarball of preloaded images
	I0414 14:28:44.924630 1213155 preload.go:172] Found /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0414 14:28:44.924642 1213155 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0414 14:28:44.925004 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:28:44.925036 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json: {Name:mk9cf46898e9311ef305249e5d7a46d116958366 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:28:44.925215 1213155 start.go:360] acquireMachinesLock for ha-290859: {Name:mk496006d22a0565bb9e0d565e1b3cb0cf0971cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 14:28:44.925249 1213155 start.go:364] duration metric: took 19.936µs to acquireMachinesLock for "ha-290859"
	I0414 14:28:44.925270 1213155 start.go:93] Provisioning new machine with config: &{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0414 14:28:44.925333 1213155 start.go:125] createHost starting for "" (driver="kvm2")
	I0414 14:28:44.926873 1213155 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0414 14:28:44.927025 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:28:44.927081 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:28:44.941913 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35769
	I0414 14:28:44.942352 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:28:44.942833 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:28:44.942851 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:28:44.943193 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:28:44.943375 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:28:44.943526 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:28:44.943664 1213155 start.go:159] libmachine.API.Create for "ha-290859" (driver="kvm2")
	I0414 14:28:44.943687 1213155 client.go:168] LocalClient.Create starting
	I0414 14:28:44.943713 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem
	I0414 14:28:44.943749 1213155 main.go:141] libmachine: Decoding PEM data...
	I0414 14:28:44.943766 1213155 main.go:141] libmachine: Parsing certificate...
	I0414 14:28:44.943825 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem
	I0414 14:28:44.943844 1213155 main.go:141] libmachine: Decoding PEM data...
	I0414 14:28:44.943857 1213155 main.go:141] libmachine: Parsing certificate...
	I0414 14:28:44.943880 1213155 main.go:141] libmachine: Running pre-create checks...
	I0414 14:28:44.943888 1213155 main.go:141] libmachine: (ha-290859) Calling .PreCreateCheck
	I0414 14:28:44.944202 1213155 main.go:141] libmachine: (ha-290859) Calling .GetConfigRaw
	I0414 14:28:44.944583 1213155 main.go:141] libmachine: Creating machine...
	I0414 14:28:44.944596 1213155 main.go:141] libmachine: (ha-290859) Calling .Create
	I0414 14:28:44.944741 1213155 main.go:141] libmachine: (ha-290859) creating KVM machine...
	I0414 14:28:44.944764 1213155 main.go:141] libmachine: (ha-290859) creating network...
	I0414 14:28:44.945897 1213155 main.go:141] libmachine: (ha-290859) DBG | found existing default KVM network
	I0414 14:28:44.946500 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:44.946375 1213178 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001236b0}
	I0414 14:28:44.946525 1213155 main.go:141] libmachine: (ha-290859) DBG | created network xml: 
	I0414 14:28:44.946536 1213155 main.go:141] libmachine: (ha-290859) DBG | <network>
	I0414 14:28:44.946547 1213155 main.go:141] libmachine: (ha-290859) DBG |   <name>mk-ha-290859</name>
	I0414 14:28:44.946556 1213155 main.go:141] libmachine: (ha-290859) DBG |   <dns enable='no'/>
	I0414 14:28:44.946567 1213155 main.go:141] libmachine: (ha-290859) DBG |   
	I0414 14:28:44.946578 1213155 main.go:141] libmachine: (ha-290859) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0414 14:28:44.946589 1213155 main.go:141] libmachine: (ha-290859) DBG |     <dhcp>
	I0414 14:28:44.946597 1213155 main.go:141] libmachine: (ha-290859) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0414 14:28:44.946611 1213155 main.go:141] libmachine: (ha-290859) DBG |     </dhcp>
	I0414 14:28:44.946635 1213155 main.go:141] libmachine: (ha-290859) DBG |   </ip>
	I0414 14:28:44.946659 1213155 main.go:141] libmachine: (ha-290859) DBG |   
	I0414 14:28:44.946681 1213155 main.go:141] libmachine: (ha-290859) DBG | </network>
	I0414 14:28:44.946692 1213155 main.go:141] libmachine: (ha-290859) DBG | 
	I0414 14:28:44.951588 1213155 main.go:141] libmachine: (ha-290859) DBG | trying to create private KVM network mk-ha-290859 192.168.39.0/24...
	I0414 14:28:45.019463 1213155 main.go:141] libmachine: (ha-290859) DBG | private KVM network mk-ha-290859 192.168.39.0/24 created
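The XML dumped above is an ordinary libvirt network definition: an isolated 192.168.39.0/24 bridge with DNS disabled and a dnsmasq DHCP range of .2-.253. As a point of reference, a minimal sketch of the define-and-start step in Go, assuming the libvirt Go bindings (libvirt.org/go/libvirt, which need the libvirt development headers to build); this illustrates the step, it is not the kvm2 driver's actual code:

	package main

	import (
		"log"

		libvirt "libvirt.org/go/libvirt"
	)

	// The network definition printed in the log above.
	const networkXML = `<network>
	  <name>mk-ha-290859</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>`

	func main() {
		conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		// Define the persistent network object, then start it; this is the
		// "trying to create private KVM network" / "created" pair above.
		net, err := conn.NetworkDefineXML(networkXML)
		if err != nil {
			log.Fatal(err)
		}
		defer net.Free()
		if err := net.Create(); err != nil {
			log.Fatal(err)
		}
	}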
	I0414 14:28:45.019524 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.019424 1213178 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:28:45.019537 1213155 main.go:141] libmachine: (ha-290859) setting up store path in /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859 ...
	I0414 14:28:45.019577 1213155 main.go:141] libmachine: (ha-290859) building disk image from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 14:28:45.019612 1213155 main.go:141] libmachine: (ha-290859) Downloading /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 14:28:45.329551 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.329430 1213178 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa...
	I0414 14:28:45.651739 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.651571 1213178 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/ha-290859.rawdisk...
	I0414 14:28:45.651774 1213155 main.go:141] libmachine: (ha-290859) DBG | Writing magic tar header
	I0414 14:28:45.651813 1213155 main.go:141] libmachine: (ha-290859) DBG | Writing SSH key tar header
	I0414 14:28:45.651828 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.651709 1213178 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859 ...
	I0414 14:28:45.651838 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859
	I0414 14:28:45.651849 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines
	I0414 14:28:45.651870 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:28:45.651877 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368
	I0414 14:28:45.651888 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859 (perms=drwx------)
	I0414 14:28:45.651901 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines (perms=drwxr-xr-x)
	I0414 14:28:45.651912 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube (perms=drwxr-xr-x)
	I0414 14:28:45.651969 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 14:28:45.651997 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins
	I0414 14:28:45.652007 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368 (perms=drwxrwxr-x)
	I0414 14:28:45.652022 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 14:28:45.652031 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 14:28:45.652040 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home
	I0414 14:28:45.652050 1213155 main.go:141] libmachine: (ha-290859) DBG | skipping /home - not owner
	I0414 14:28:45.652117 1213155 main.go:141] libmachine: (ha-290859) creating domain...
	I0414 14:28:45.653155 1213155 main.go:141] libmachine: (ha-290859) define libvirt domain using xml: 
	I0414 14:28:45.653173 1213155 main.go:141] libmachine: (ha-290859) <domain type='kvm'>
	I0414 14:28:45.653182 1213155 main.go:141] libmachine: (ha-290859)   <name>ha-290859</name>
	I0414 14:28:45.653197 1213155 main.go:141] libmachine: (ha-290859)   <memory unit='MiB'>2200</memory>
	I0414 14:28:45.653206 1213155 main.go:141] libmachine: (ha-290859)   <vcpu>2</vcpu>
	I0414 14:28:45.653212 1213155 main.go:141] libmachine: (ha-290859)   <features>
	I0414 14:28:45.653231 1213155 main.go:141] libmachine: (ha-290859)     <acpi/>
	I0414 14:28:45.653240 1213155 main.go:141] libmachine: (ha-290859)     <apic/>
	I0414 14:28:45.653258 1213155 main.go:141] libmachine: (ha-290859)     <pae/>
	I0414 14:28:45.653267 1213155 main.go:141] libmachine: (ha-290859)     
	I0414 14:28:45.653272 1213155 main.go:141] libmachine: (ha-290859)   </features>
	I0414 14:28:45.653277 1213155 main.go:141] libmachine: (ha-290859)   <cpu mode='host-passthrough'>
	I0414 14:28:45.653281 1213155 main.go:141] libmachine: (ha-290859)   
	I0414 14:28:45.653287 1213155 main.go:141] libmachine: (ha-290859)   </cpu>
	I0414 14:28:45.653317 1213155 main.go:141] libmachine: (ha-290859)   <os>
	I0414 14:28:45.653340 1213155 main.go:141] libmachine: (ha-290859)     <type>hvm</type>
	I0414 14:28:45.653351 1213155 main.go:141] libmachine: (ha-290859)     <boot dev='cdrom'/>
	I0414 14:28:45.653362 1213155 main.go:141] libmachine: (ha-290859)     <boot dev='hd'/>
	I0414 14:28:45.653372 1213155 main.go:141] libmachine: (ha-290859)     <bootmenu enable='no'/>
	I0414 14:28:45.653379 1213155 main.go:141] libmachine: (ha-290859)   </os>
	I0414 14:28:45.653387 1213155 main.go:141] libmachine: (ha-290859)   <devices>
	I0414 14:28:45.653396 1213155 main.go:141] libmachine: (ha-290859)     <disk type='file' device='cdrom'>
	I0414 14:28:45.653409 1213155 main.go:141] libmachine: (ha-290859)       <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/boot2docker.iso'/>
	I0414 14:28:45.653425 1213155 main.go:141] libmachine: (ha-290859)       <target dev='hdc' bus='scsi'/>
	I0414 14:28:45.653434 1213155 main.go:141] libmachine: (ha-290859)       <readonly/>
	I0414 14:28:45.653441 1213155 main.go:141] libmachine: (ha-290859)     </disk>
	I0414 14:28:45.653450 1213155 main.go:141] libmachine: (ha-290859)     <disk type='file' device='disk'>
	I0414 14:28:45.653459 1213155 main.go:141] libmachine: (ha-290859)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 14:28:45.653472 1213155 main.go:141] libmachine: (ha-290859)       <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/ha-290859.rawdisk'/>
	I0414 14:28:45.653484 1213155 main.go:141] libmachine: (ha-290859)       <target dev='hda' bus='virtio'/>
	I0414 14:28:45.653515 1213155 main.go:141] libmachine: (ha-290859)     </disk>
	I0414 14:28:45.653535 1213155 main.go:141] libmachine: (ha-290859)     <interface type='network'>
	I0414 14:28:45.653542 1213155 main.go:141] libmachine: (ha-290859)       <source network='mk-ha-290859'/>
	I0414 14:28:45.653551 1213155 main.go:141] libmachine: (ha-290859)       <model type='virtio'/>
	I0414 14:28:45.653571 1213155 main.go:141] libmachine: (ha-290859)     </interface>
	I0414 14:28:45.653583 1213155 main.go:141] libmachine: (ha-290859)     <interface type='network'>
	I0414 14:28:45.653600 1213155 main.go:141] libmachine: (ha-290859)       <source network='default'/>
	I0414 14:28:45.653612 1213155 main.go:141] libmachine: (ha-290859)       <model type='virtio'/>
	I0414 14:28:45.653620 1213155 main.go:141] libmachine: (ha-290859)     </interface>
	I0414 14:28:45.653629 1213155 main.go:141] libmachine: (ha-290859)     <serial type='pty'>
	I0414 14:28:45.653637 1213155 main.go:141] libmachine: (ha-290859)       <target port='0'/>
	I0414 14:28:45.653643 1213155 main.go:141] libmachine: (ha-290859)     </serial>
	I0414 14:28:45.653650 1213155 main.go:141] libmachine: (ha-290859)     <console type='pty'>
	I0414 14:28:45.653666 1213155 main.go:141] libmachine: (ha-290859)       <target type='serial' port='0'/>
	I0414 14:28:45.653677 1213155 main.go:141] libmachine: (ha-290859)     </console>
	I0414 14:28:45.653688 1213155 main.go:141] libmachine: (ha-290859)     <rng model='virtio'>
	I0414 14:28:45.653706 1213155 main.go:141] libmachine: (ha-290859)       <backend model='random'>/dev/random</backend>
	I0414 14:28:45.653722 1213155 main.go:141] libmachine: (ha-290859)     </rng>
	I0414 14:28:45.653733 1213155 main.go:141] libmachine: (ha-290859)     
	I0414 14:28:45.653742 1213155 main.go:141] libmachine: (ha-290859)     
	I0414 14:28:45.653750 1213155 main.go:141] libmachine: (ha-290859)   </devices>
	I0414 14:28:45.653759 1213155 main.go:141] libmachine: (ha-290859) </domain>
	I0414 14:28:45.653770 1213155 main.go:141] libmachine: (ha-290859) 
	I0414 14:28:45.658722 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:59:bb:2c in network default
	I0414 14:28:45.659333 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:45.659353 1213155 main.go:141] libmachine: (ha-290859) starting domain...
	I0414 14:28:45.659378 1213155 main.go:141] libmachine: (ha-290859) ensuring networks are active...
	I0414 14:28:45.660118 1213155 main.go:141] libmachine: (ha-290859) Ensuring network default is active
	I0414 14:28:45.660455 1213155 main.go:141] libmachine: (ha-290859) Ensuring network mk-ha-290859 is active
	I0414 14:28:45.660871 1213155 main.go:141] libmachine: (ha-290859) getting domain XML...
	I0414 14:28:45.661572 1213155 main.go:141] libmachine: (ha-290859) creating domain...
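With the domain XML above in hand, the driver persists the definition and boots it (in libvirt terms, "create" starts a defined domain). A function-level sketch of those two calls, meant to slot into the previous snippet's program and again assuming the libvirt.org/go/libvirt bindings:

	// defineAndStart mirrors the "define libvirt domain using xml" and
	// "creating domain..." steps above: persist the definition, then boot it.
	func defineAndStart(conn *libvirt.Connect, domainXML string) error {
		dom, err := conn.DomainDefineXML(domainXML)
		if err != nil {
			return err
		}
		defer dom.Free()
		return dom.Create() // boots the VM; the log then waits for an IP
	}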
	I0414 14:28:46.865636 1213155 main.go:141] libmachine: (ha-290859) waiting for IP...
	I0414 14:28:46.866384 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:46.866766 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:46.866798 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:46.866746 1213178 retry.go:31] will retry after 192.973653ms: waiting for domain to come up
	I0414 14:28:47.061336 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:47.061771 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:47.061833 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:47.061746 1213178 retry.go:31] will retry after 359.567223ms: waiting for domain to come up
	I0414 14:28:47.423487 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:47.423982 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:47.424016 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:47.423949 1213178 retry.go:31] will retry after 421.939914ms: waiting for domain to come up
	I0414 14:28:47.847747 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:47.848233 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:47.848285 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:47.848207 1213178 retry.go:31] will retry after 530.391474ms: waiting for domain to come up
	I0414 14:28:48.380081 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:48.380580 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:48.380623 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:48.380551 1213178 retry.go:31] will retry after 642.117854ms: waiting for domain to come up
	I0414 14:28:49.024104 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:49.024507 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:49.024543 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:49.024472 1213178 retry.go:31] will retry after 676.607867ms: waiting for domain to come up
	I0414 14:28:49.702625 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:49.702971 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:49.702999 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:49.702940 1213178 retry.go:31] will retry after 827.403569ms: waiting for domain to come up
	I0414 14:28:50.531673 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:50.532146 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:50.532168 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:50.532111 1213178 retry.go:31] will retry after 1.096062201s: waiting for domain to come up
	I0414 14:28:51.630700 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:51.631223 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:51.631271 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:51.631180 1213178 retry.go:31] will retry after 1.695737217s: waiting for domain to come up
	I0414 14:28:53.328391 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:53.328936 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:53.328976 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:53.328895 1213178 retry.go:31] will retry after 1.847433296s: waiting for domain to come up
	I0414 14:28:55.178635 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:55.179196 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:55.179222 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:55.179116 1213178 retry.go:31] will retry after 1.882043118s: waiting for domain to come up
	I0414 14:28:57.063275 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:57.063819 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:57.063839 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:57.063785 1213178 retry.go:31] will retry after 2.565601812s: waiting for domain to come up
	I0414 14:28:59.632546 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:59.633076 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:59.633121 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:59.633056 1213178 retry.go:31] will retry after 3.119155423s: waiting for domain to come up
	I0414 14:29:02.755950 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:02.756520 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:29:02.756617 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:29:02.756481 1213178 retry.go:31] will retry after 3.570724653s: waiting for domain to come up
	I0414 14:29:06.329744 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.330242 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has current primary IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.330260 1213155 main.go:141] libmachine: (ha-290859) found domain IP: 192.168.39.110
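The retry.go:31 lines above show the wait-for-IP loop: the driver polls the network's DHCP leases with a growing delay (from ~193ms up to ~3.6s here) until the guest's MAC acquires an address. A self-contained sketch of that backoff pattern (the probe function and delay growth are illustrative, not minikube's retry package):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitForIP polls probe with an increasing delay, mirroring the
	// "will retry after ..." lines in the log above.
	func waitForIP(probe func() (string, error), timeout time.Duration) (string, error) {
		delay := 200 * time.Millisecond
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if ip, err := probe(); err == nil {
				return ip, nil
			}
			time.Sleep(delay)
			if delay < 4*time.Second {
				delay = delay * 3 / 2 // grow the backoff, roughly as the log shows
			}
		}
		return "", errors.New("timed out waiting for domain IP")
	}

	func main() {
		attempts := 0
		ip, err := waitForIP(func() (string, error) {
			attempts++ // stand-in for reading the DHCP lease table
			if attempts < 4 {
				return "", errors.New("no lease yet")
			}
			return "192.168.39.110", nil
		}, 30*time.Second)
		fmt.Println(ip, err)
	}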
	I0414 14:29:06.330269 1213155 main.go:141] libmachine: (ha-290859) reserving static IP address...
	I0414 14:29:06.330641 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find host DHCP lease matching {name: "ha-290859", mac: "52:54:00:be:9f:8b", ip: "192.168.39.110"} in network mk-ha-290859
	I0414 14:29:06.406487 1213155 main.go:141] libmachine: (ha-290859) DBG | Getting to WaitForSSH function...
	I0414 14:29:06.406521 1213155 main.go:141] libmachine: (ha-290859) reserved static IP address 192.168.39.110 for domain ha-290859
	I0414 14:29:06.406533 1213155 main.go:141] libmachine: (ha-290859) waiting for SSH...
	I0414 14:29:06.409873 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.410210 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:minikube Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.410253 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.410314 1213155 main.go:141] libmachine: (ha-290859) DBG | Using SSH client type: external
	I0414 14:29:06.410387 1213155 main.go:141] libmachine: (ha-290859) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa (-rw-------)
	I0414 14:29:06.410418 1213155 main.go:141] libmachine: (ha-290859) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.110 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:29:06.410439 1213155 main.go:141] libmachine: (ha-290859) DBG | About to run SSH command:
	I0414 14:29:06.410452 1213155 main.go:141] libmachine: (ha-290859) DBG | exit 0
	I0414 14:29:06.535060 1213155 main.go:141] libmachine: (ha-290859) DBG | SSH cmd err, output: <nil>: 
	I0414 14:29:06.535328 1213155 main.go:141] libmachine: (ha-290859) KVM machine creation complete
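The "Using SSH client type: external" block above spells out the readiness probe: /usr/bin/ssh with host-key checking disabled, the machine's generated key, and the command `exit 0`, retried until it succeeds. A minimal sketch of that probe (address and key path taken from the log; the option list is trimmed to the ones that matter for a non-interactive check):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// sshReady runs "exit 0" over ssh, the same probe the log shows, and
	// reports whether the guest's sshd is accepting the machine key.
	func sshReady(addr, keyPath string) bool {
		cmd := exec.Command("/usr/bin/ssh",
			"-F", "/dev/null",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"docker@"+addr,
			"exit 0")
		return cmd.Run() == nil
	}

	func main() {
		key := "/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa"
		for !sshReady("192.168.39.110", key) {
			time.Sleep(time.Second) // the driver likewise retries until sshd answers
		}
		fmt.Println("SSH is reachable")
	}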
	I0414 14:29:06.535695 1213155 main.go:141] libmachine: (ha-290859) Calling .GetConfigRaw
	I0414 14:29:06.536306 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:06.536530 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:06.536742 1213155 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 14:29:06.536766 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:06.538276 1213155 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 14:29:06.538292 1213155 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 14:29:06.538297 1213155 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 14:29:06.538303 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:06.540789 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.541096 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.541142 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.541273 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:06.541468 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.541620 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.541797 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:06.541943 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:06.542216 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:06.542236 1213155 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 14:29:06.650464 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:29:06.650493 1213155 main.go:141] libmachine: Detecting the provisioner...
	I0414 14:29:06.650505 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:06.653952 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.654723 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.654757 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.654985 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:06.655204 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.655393 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.655541 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:06.655742 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:06.655964 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:06.655983 1213155 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 14:29:06.763752 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 14:29:06.763848 1213155 main.go:141] libmachine: found compatible host: buildroot
	I0414 14:29:06.763862 1213155 main.go:141] libmachine: Provisioning with buildroot...
	I0414 14:29:06.763874 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:29:06.764294 1213155 buildroot.go:166] provisioning hostname "ha-290859"
	I0414 14:29:06.764326 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:29:06.764523 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:06.767077 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.767516 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.767542 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.767639 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:06.767813 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.767978 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.768165 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:06.768341 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:06.768572 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:06.768583 1213155 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-290859 && echo "ha-290859" | sudo tee /etc/hostname
	I0414 14:29:06.889296 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-290859
	
	I0414 14:29:06.889330 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:06.892172 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.892600 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.892626 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.892865 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:06.893083 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.893277 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.893435 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:06.893648 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:06.893858 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:06.893874 1213155 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-290859' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-290859/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-290859' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:29:07.007141 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:29:07.007184 1213155 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1196368/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1196368/.minikube}
	I0414 14:29:07.007203 1213155 buildroot.go:174] setting up certificates
	I0414 14:29:07.007215 1213155 provision.go:84] configureAuth start
	I0414 14:29:07.007224 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:29:07.007528 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:29:07.010400 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.010788 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.010824 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.010979 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.012963 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.013271 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.013387 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.013515 1213155 provision.go:143] copyHostCerts
	I0414 14:29:07.013548 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:29:07.013586 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem, removing ...
	I0414 14:29:07.013609 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:29:07.013691 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem (1082 bytes)
	I0414 14:29:07.013790 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:29:07.013815 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem, removing ...
	I0414 14:29:07.013825 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:29:07.013863 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem (1123 bytes)
	I0414 14:29:07.013930 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:29:07.013953 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem, removing ...
	I0414 14:29:07.013962 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:29:07.013998 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem (1675 bytes)
	I0414 14:29:07.014066 1213155 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem org=jenkins.ha-290859 san=[127.0.0.1 192.168.39.110 ha-290859 localhost minikube]
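provision.go:117 above generates a server certificate whose subject alternative names cover the loopback address, the VM's IP, the hostname, and the generic names localhost and minikube. A compact sketch of producing such a SAN-bearing server certificate with Go's standard crypto/x509 (self-signed here for brevity, where minikube instead signs with its CA key ca-key.pem; the SAN list and org are the ones from the log line):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-290859"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s in the config above
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs from the log: san=[127.0.0.1 192.168.39.110 ha-290859 localhost minikube]
			DNSNames:    []string{"ha-290859", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.110")},
		}
		// Self-signed (template doubles as parent); minikube signs with its CA instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}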
	I0414 14:29:07.096347 1213155 provision.go:177] copyRemoteCerts
	I0414 14:29:07.096413 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:29:07.096445 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.099387 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.099720 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.099754 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.099919 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.100133 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.100320 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.100477 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:07.185597 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0414 14:29:07.185665 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 14:29:07.208427 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0414 14:29:07.208514 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0414 14:29:07.230077 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0414 14:29:07.230146 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 14:29:07.252057 1213155 provision.go:87] duration metric: took 244.822415ms to configureAuth
	I0414 14:29:07.252098 1213155 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:29:07.252381 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:07.252417 1213155 main.go:141] libmachine: Checking connection to Docker...
	I0414 14:29:07.252428 1213155 main.go:141] libmachine: (ha-290859) Calling .GetURL
	I0414 14:29:07.253526 1213155 main.go:141] libmachine: (ha-290859) DBG | using libvirt version 6000000
	I0414 14:29:07.255629 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.255987 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.256013 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.256164 1213155 main.go:141] libmachine: Docker is up and running!
	I0414 14:29:07.256179 1213155 main.go:141] libmachine: Reticulating splines...
	I0414 14:29:07.256186 1213155 client.go:171] duration metric: took 22.312490028s to LocalClient.Create
	I0414 14:29:07.256207 1213155 start.go:167] duration metric: took 22.312544194s to libmachine.API.Create "ha-290859"
	I0414 14:29:07.256216 1213155 start.go:293] postStartSetup for "ha-290859" (driver="kvm2")
	I0414 14:29:07.256225 1213155 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:29:07.256242 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.256494 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:29:07.256518 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.258683 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.259095 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.259129 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.259274 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.259443 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.259598 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.259770 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:07.341222 1213155 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:29:07.344960 1213155 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:29:07.344983 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/addons for local assets ...
	I0414 14:29:07.345036 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/files for local assets ...
	I0414 14:29:07.345105 1213155 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> 12036392.pem in /etc/ssl/certs
	I0414 14:29:07.345117 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /etc/ssl/certs/12036392.pem
	I0414 14:29:07.345204 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:29:07.353618 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:29:07.375295 1213155 start.go:296] duration metric: took 119.0622ms for postStartSetup
	I0414 14:29:07.375348 1213155 main.go:141] libmachine: (ha-290859) Calling .GetConfigRaw
	I0414 14:29:07.376009 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:29:07.378738 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.379089 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.379127 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.379360 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:29:07.379552 1213155 start.go:128] duration metric: took 22.454193164s to createHost
	I0414 14:29:07.379576 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.381911 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.382271 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.382299 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.382412 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.382636 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.382763 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.382918 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.383103 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:07.383383 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:07.383397 1213155 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:29:07.491798 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744640947.466359070
	
	I0414 14:29:07.491832 1213155 fix.go:216] guest clock: 1744640947.466359070
	I0414 14:29:07.491843 1213155 fix.go:229] Guest: 2025-04-14 14:29:07.46635907 +0000 UTC Remote: 2025-04-14 14:29:07.37956282 +0000 UTC m=+22.563725092 (delta=86.79625ms)
	I0414 14:29:07.491874 1213155 fix.go:200] guest clock delta is within tolerance: 86.79625ms
	I0414 14:29:07.491882 1213155 start.go:83] releasing machines lock for "ha-290859", held for 22.566621352s
	I0414 14:29:07.491951 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.492257 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:29:07.494784 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.495186 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.495213 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.495369 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.495891 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.496108 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.496210 1213155 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:29:07.496270 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.496330 1213155 ssh_runner.go:195] Run: cat /version.json
	I0414 14:29:07.496359 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.499187 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.499556 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.499585 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.499605 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.499687 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.499909 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.500059 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.500076 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.500080 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.500225 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:07.500343 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.500495 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.500676 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.500868 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:07.610155 1213155 ssh_runner.go:195] Run: systemctl --version
	I0414 14:29:07.615832 1213155 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 14:29:07.620841 1213155 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:29:07.620918 1213155 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:29:07.635201 1213155 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 14:29:07.635238 1213155 start.go:495] detecting cgroup driver to use...
	I0414 14:29:07.635339 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0414 14:29:07.664507 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0414 14:29:07.677886 1213155 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:29:07.677968 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:29:07.691126 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:29:07.704327 1213155 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:29:07.821296 1213155 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:29:07.981478 1213155 docker.go:233] disabling docker service ...
	I0414 14:29:07.981570 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:29:07.995082 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:29:08.007593 1213155 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:29:08.118166 1213155 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:29:08.233009 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 14:29:08.245943 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:29:08.262966 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0414 14:29:08.272218 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0414 14:29:08.281344 1213155 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0414 14:29:08.281397 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0414 14:29:08.290468 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:29:08.299561 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0414 14:29:08.308656 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:29:08.317719 1213155 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:29:08.327133 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0414 14:29:08.336264 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0414 14:29:08.345279 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
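Each of the sed one-liners above rewrites /etc/containerd/config.toml in place with a line-anchored, indentation-preserving regex rather than a TOML parser. A Go equivalent of just the SystemdCgroup toggle (the function name is mine; the regex mirrors the `sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'` call in the log):

package main

import (
	"fmt"
	"regexp"
)

// setSystemdCgroup flips the SystemdCgroup key the way the sed call does:
// anchor on the whole line, keep the captured leading indentation.
func setSystemdCgroup(config string, enabled bool) string {
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	return re.ReplaceAllString(config, fmt.Sprintf("${1}SystemdCgroup = %v", enabled))
}

func main() {
	in := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true`
	// cgroupfs driver selected above => SystemdCgroup = false
	fmt.Println(setSystemdCgroup(in, false))
}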
	I0414 14:29:08.354386 1213155 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:29:08.362578 1213155 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:29:08.362625 1213155 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:29:08.374609 1213155 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 14:29:08.383117 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:29:08.490311 1213155 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0414 14:29:08.517222 1213155 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0414 14:29:08.517297 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:29:08.522141 1213155 retry.go:31] will retry after 1.326617724s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0414 14:29:09.849693 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
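The 60s wait for /run/containerd/containerd.sock is a plain poll: stat the socket, retry on failure until the deadline. A minimal sketch of that pattern; the fixed 500ms interval is an assumption, whereas the log shows minikube's retry helper picking a jittered 1.326617724s delay:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the deadline passes, mirroring
// the "Will wait 60s for socket path" step above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond) // assumption: fixed interval
	}
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("containerd socket is up")
}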
	I0414 14:29:09.855377 1213155 start.go:563] Will wait 60s for crictl version
	I0414 14:29:09.855452 1213155 ssh_runner.go:195] Run: which crictl
	I0414 14:29:09.859356 1213155 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:29:09.901676 1213155 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0414 14:29:09.901749 1213155 ssh_runner.go:195] Run: containerd --version
	I0414 14:29:09.933729 1213155 ssh_runner.go:195] Run: containerd --version
	I0414 14:29:09.957147 1213155 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.23 ...
	I0414 14:29:09.958358 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:29:09.961074 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:09.961436 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:09.961465 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:09.961654 1213155 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 14:29:09.965618 1213155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:29:09.977763 1213155 kubeadm.go:883] updating cluster {Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 14:29:09.977920 1213155 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:29:09.977985 1213155 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:29:10.007423 1213155 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0414 14:29:10.007567 1213155 ssh_runner.go:195] Run: which lz4
	I0414 14:29:10.011302 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0414 14:29:10.011399 1213155 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 14:29:10.015201 1213155 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 14:29:10.015237 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (398567491 bytes)
	I0414 14:29:11.177802 1213155 containerd.go:563] duration metric: took 1.166430977s to copy over tarball
	I0414 14:29:11.177883 1213155 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 14:29:13.222422 1213155 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.044497794s)
	I0414 14:29:13.222461 1213155 containerd.go:570] duration metric: took 2.04462504s to extract the tarball
	I0414 14:29:13.222471 1213155 ssh_runner.go:146] rm: /preloaded.tar.lz4
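The preload path above is: query the image store via crictl, find nothing, scp the ~380 MB lz4 tarball onto the guest, and unpack it over /var before deleting it. A sketch of the extraction call, shelling out with the same flags the log shows; running without sudo and the literal paths in main are simplifications:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload unpacks the preloaded-images tarball the way the log's
// `tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf`
// invocation does. Requires tar and lz4 on PATH.
func extractPreload(tarball, dest string) error {
	cmd := exec.Command("tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}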
	I0414 14:29:13.258541 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:29:13.368119 1213155 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0414 14:29:13.394813 1213155 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:29:13.428402 1213155 retry.go:31] will retry after 248.442754ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-04-14T14:29:13Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0414 14:29:13.677983 1213155 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:29:13.709958 1213155 containerd.go:627] all images are preloaded for containerd runtime.
	I0414 14:29:13.709986 1213155 cache_images.go:84] Images are preloaded, skipping loading
	I0414 14:29:13.709997 1213155 kubeadm.go:934] updating node { 192.168.39.110 8443 v1.32.2 containerd true true} ...
	I0414 14:29:13.710119 1213155 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-290859 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.110
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 14:29:13.710205 1213155 ssh_runner.go:195] Run: sudo crictl info
	I0414 14:29:13.747854 1213155 cni.go:84] Creating CNI manager for ""
	I0414 14:29:13.747881 1213155 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0414 14:29:13.747891 1213155 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 14:29:13.747912 1213155 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.110 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-290859 NodeName:ha-290859 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.110"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.110 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 14:29:13.748064 1213155 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.110
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-290859"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.110"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.110"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
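The kubeadm config above is rendered from the kubeadm options struct logged a few lines earlier. A toy text/template sketch of that kind of rendering; the template text and struct fields here are illustrative assumptions, not minikube's actual kubeadm template:

package main

import (
	"os"
	"text/template"
)

// kubeadmParams is an illustrative subset of the options in the log.
type kubeadmParams struct {
	AdvertiseAddress string
	APIServerPort    int
	PodSubnet        string
	ServiceCIDR      string
	K8sVersion       string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	p := kubeadmParams{
		AdvertiseAddress: "192.168.39.110",
		APIServerPort:    8443,
		PodSubnet:        "10.244.0.0/16",
		ServiceCIDR:      "10.96.0.0/12",
		K8sVersion:       "v1.32.2",
	}
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	_ = t.Execute(os.Stdout, p) // emits the two YAML documents
}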
	
	I0414 14:29:13.748098 1213155 kube-vip.go:115] generating kube-vip config ...
	I0414 14:29:13.748144 1213155 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0414 14:29:13.764006 1213155 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0414 14:29:13.764157 1213155 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.10
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
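kube-vip's control-plane load-balancing (the lb_enable env var above) was only auto-enabled because the modprobe of the ip_vs module family succeeded a few lines earlier. A sketch of that capability probe; the function name is mine, and sudo is omitted:

package main

import (
	"fmt"
	"os/exec"
)

// ipvsAvailable mirrors the log's
// `sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"`:
// if the modules load, IPVS-based control-plane LB can be switched on.
func ipvsAvailable() bool {
	err := exec.Command("modprobe", "--all",
		"ip_vs", "ip_vs_rr", "ip_vs_wrr", "ip_vs_sh", "nf_conntrack").Run()
	return err == nil
}

func main() {
	fmt.Println("IPVS load-balancing possible:", ipvsAvailable())
}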
	I0414 14:29:13.764258 1213155 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 14:29:13.773742 1213155 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 14:29:13.773825 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0414 14:29:13.782879 1213155 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0414 14:29:13.798384 1213155 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 14:29:13.813614 1213155 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0414 14:29:13.828571 1213155 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1448 bytes)
	I0414 14:29:13.844489 1213155 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0414 14:29:13.848595 1213155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
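Both /etc/hosts edits above (host.minikube.internal earlier, control-plane.minikube.internal here) use the same shell idiom: filter out any existing entry for the name, append the fresh mapping, write to a temp file, then copy it into place so the update lands in one replace. A pure-Go sketch of the same idea; the demo path, seed content, and use of rename instead of `sudo cp` are simplifications:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost re-creates the `{ grep -v ...; echo ...; } > /tmp/h.$$; sudo cp`
// pattern from the log: drop stale lines for name, append the new mapping,
// then swap the file in atomically.
func upsertHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var out []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // equivalent of grep -v $'\t<name>$'
		}
		out = append(out, line)
	}
	out = append(out, fmt.Sprintf("%s\t%s", ip, name))
	tmp := hostsPath + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(out, "\n")+"\n"), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, hostsPath) // the log uses `sudo cp` instead
}

func main() {
	// Seed a demo file; the real target is /etc/hosts on the guest.
	_ = os.WriteFile("/tmp/hosts-demo", []byte("127.0.0.1\tlocalhost\n"), 0o644)
	if err := upsertHost("/tmp/hosts-demo", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}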
	I0414 14:29:13.861109 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:29:13.970530 1213155 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:29:13.987774 1213155 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859 for IP: 192.168.39.110
	I0414 14:29:13.987806 1213155 certs.go:194] generating shared ca certs ...
	I0414 14:29:13.987826 1213155 certs.go:226] acquiring lock for ca certs: {Name:mk7215406b4c41badf9eca6bf9f1036fd88f670e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:13.988007 1213155 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key
	I0414 14:29:13.988081 1213155 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key
	I0414 14:29:13.988097 1213155 certs.go:256] generating profile certs ...
	I0414 14:29:13.988180 1213155 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key
	I0414 14:29:13.988200 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt with IP's: []
	I0414 14:29:14.112386 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt ...
	I0414 14:29:14.112419 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt: {Name:mkaa12fb6551a5751b7fccd564d65a45c41d9fae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.112582 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key ...
	I0414 14:29:14.112593 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key: {Name:mk289f4dd0a4fd9031dc4ffc7198a0cf95bd5550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.112674 1213155 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037
	I0414 14:29:14.112690 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110 192.168.39.254]
	I0414 14:29:14.362652 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037 ...
	I0414 14:29:14.362686 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037: {Name:mkb37a2918627d85c90b385a1878c8973ae4ce15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.362861 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037 ...
	I0414 14:29:14.362875 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037: {Name:mk9be12aff468559ae8511cb5c354c2cb0f19d89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.362947 1213155 certs.go:381] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt
	I0414 14:29:14.363058 1213155 certs.go:385] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key
	I0414 14:29:14.363124 1213155 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key
	I0414 14:29:14.363139 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt with IP's: []
	I0414 14:29:14.734988 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt ...
	I0414 14:29:14.735020 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt: {Name:mkd4197f76084714cf4c93b86f69c9de5e486dfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.735175 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key ...
	I0414 14:29:14.735185 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key: {Name:mkafd73813de8b0bb698e460f51557bc241d5b76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.735249 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0414 14:29:14.735287 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0414 14:29:14.735300 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0414 14:29:14.735312 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0414 14:29:14.735324 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0414 14:29:14.735336 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0414 14:29:14.735348 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0414 14:29:14.735362 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0414 14:29:14.735413 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem (1338 bytes)
	W0414 14:29:14.735450 1213155 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639_empty.pem, impossibly tiny 0 bytes
	I0414 14:29:14.735459 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 14:29:14.735483 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem (1082 bytes)
	I0414 14:29:14.735504 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:29:14.735524 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem (1675 bytes)
	I0414 14:29:14.735559 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:29:14.735585 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:14.735598 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem -> /usr/share/ca-certificates/1203639.pem
	I0414 14:29:14.735609 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /usr/share/ca-certificates/12036392.pem
	I0414 14:29:14.736193 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:29:14.767094 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:29:14.800218 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:29:14.821856 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:29:14.844537 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0414 14:29:14.866333 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 14:29:14.888112 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:29:14.916382 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 14:29:14.938747 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:29:14.961044 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem --> /usr/share/ca-certificates/1203639.pem (1338 bytes)
	I0414 14:29:14.982817 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /usr/share/ca-certificates/12036392.pem (1708 bytes)
	I0414 14:29:15.004432 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 14:29:15.020381 1213155 ssh_runner.go:195] Run: openssl version
	I0414 14:29:15.026049 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:29:15.036472 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:15.040722 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:15.040772 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:15.046327 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 14:29:15.056866 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1203639.pem && ln -fs /usr/share/ca-certificates/1203639.pem /etc/ssl/certs/1203639.pem"
	I0414 14:29:15.067689 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1203639.pem
	I0414 14:29:15.071944 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1203639.pem
	I0414 14:29:15.071988 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1203639.pem
	I0414 14:29:15.077553 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1203639.pem /etc/ssl/certs/51391683.0"
	I0414 14:29:15.088088 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12036392.pem && ln -fs /usr/share/ca-certificates/12036392.pem /etc/ssl/certs/12036392.pem"
	I0414 14:29:15.098760 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12036392.pem
	I0414 14:29:15.103102 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/12036392.pem
	I0414 14:29:15.103157 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12036392.pem
	I0414 14:29:15.108670 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12036392.pem /etc/ssl/certs/3ec20f2e.0"
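The `openssl x509 -hash` plus `ln -fs` pairs above build the OpenSSL-style hashed lookup names (b5213941.0, 51391683.0, 3ec20f2e.0) that TLS stacks use to find CA certs in /etc/ssl/certs. A small sketch that shells out to openssl for the subject hash, as the log itself does; the demo paths in main are assumptions:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// hashLink creates <certsDir>/<subject_hash>.0 -> certPath, the layout the
// `openssl x509 -hash -noout -in` + `ln -fs` pair in the log produces.
func hashLink(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // -f semantics: replace any existing link
	return link, os.Symlink(certPath, link)
}

func main() {
	// Assumed demo target dir; the guest links into /etc/ssl/certs.
	link, err := hashLink("/usr/share/ca-certificates/minikubeCA.pem", os.TempDir())
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("created", link)
}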
	I0414 14:29:15.119187 1213155 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:29:15.123052 1213155 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 14:29:15.123124 1213155 kubeadm.go:392] StartCluster: {Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:29:15.123226 1213155 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0414 14:29:15.123302 1213155 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 14:29:15.161985 1213155 cri.go:89] found id: ""
	I0414 14:29:15.162066 1213155 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 14:29:15.171810 1213155 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 14:29:15.180816 1213155 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 14:29:15.189781 1213155 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 14:29:15.189798 1213155 kubeadm.go:157] found existing configuration files:
	
	I0414 14:29:15.189837 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 14:29:15.198461 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 14:29:15.198520 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 14:29:15.207495 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 14:29:15.216131 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 14:29:15.216195 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 14:29:15.224923 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 14:29:15.233259 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 14:29:15.233331 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 14:29:15.241811 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 14:29:15.250678 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 14:29:15.250735 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
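The four grep/rm pairs above implement one rule: a leftover kubeconfig that does not reference https://control-plane.minikube.internal:8443 is considered stale and is removed before kubeadm init runs (here all four files are simply absent, so each grep exits non-zero and the rm is a no-op). A compact Go sketch of that loop; the endpoint constant and file list come from the log, the function name and simplified file handling are mine:

package main

import (
	"fmt"
	"os"
	"strings"
)

const endpoint = "https://control-plane.minikube.internal:8443"

// cleanStaleConfigs removes any kubeconfig that doesn't reference the
// expected control-plane endpoint, matching the grep-then-rm sequence
// above. A missing file also fails the check, as grep does in the log.
func cleanStaleConfigs(files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // up to date, keep it
		}
		if err := os.Remove(f); err != nil && !os.IsNotExist(err) {
			fmt.Fprintf(os.Stderr, "rm %s: %v\n", f, err)
		}
	}
}

func main() {
	cleanStaleConfigs([]string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}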
	I0414 14:29:15.260028 1213155 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 14:29:15.480841 1213155 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 14:29:26.375395 1213155 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0414 14:29:26.375454 1213155 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 14:29:26.375539 1213155 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 14:29:26.375638 1213155 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 14:29:26.375756 1213155 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 14:29:26.375859 1213155 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 14:29:26.377483 1213155 out.go:235]   - Generating certificates and keys ...
	I0414 14:29:26.377576 1213155 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 14:29:26.377649 1213155 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 14:29:26.377746 1213155 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 14:29:26.377814 1213155 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 14:29:26.377894 1213155 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 14:29:26.377993 1213155 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 14:29:26.378062 1213155 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 14:29:26.378201 1213155 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-290859 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
	I0414 14:29:26.378273 1213155 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 14:29:26.378435 1213155 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-290859 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
	I0414 14:29:26.378525 1213155 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 14:29:26.378617 1213155 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 14:29:26.378679 1213155 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 14:29:26.378756 1213155 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 14:29:26.378826 1213155 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 14:29:26.378905 1213155 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0414 14:29:26.378987 1213155 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 14:29:26.379078 1213155 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 14:29:26.379147 1213155 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 14:29:26.379232 1213155 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 14:29:26.379336 1213155 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 14:29:26.381520 1213155 out.go:235]   - Booting up control plane ...
	I0414 14:29:26.381636 1213155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 14:29:26.381716 1213155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 14:29:26.381797 1213155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 14:29:26.381942 1213155 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 14:29:26.382066 1213155 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 14:29:26.382127 1213155 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 14:29:26.382279 1213155 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0414 14:29:26.382430 1213155 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0414 14:29:26.382522 1213155 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.073677ms
	I0414 14:29:26.382613 1213155 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0414 14:29:26.382699 1213155 kubeadm.go:310] [api-check] The API server is healthy after 6.046564753s
	I0414 14:29:26.382824 1213155 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0414 14:29:26.382965 1213155 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0414 14:29:26.383055 1213155 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0414 14:29:26.383232 1213155 kubeadm.go:310] [mark-control-plane] Marking the node ha-290859 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0414 14:29:26.383336 1213155 kubeadm.go:310] [bootstrap-token] Using token: vqb1fe.jxjhh2el8g0wstxf
	I0414 14:29:26.384515 1213155 out.go:235]   - Configuring RBAC rules ...
	I0414 14:29:26.384631 1213155 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0414 14:29:26.384713 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0414 14:29:26.384863 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0414 14:29:26.384975 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0414 14:29:26.385071 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0414 14:29:26.385151 1213155 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0414 14:29:26.385262 1213155 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0414 14:29:26.385326 1213155 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0414 14:29:26.385400 1213155 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0414 14:29:26.385416 1213155 kubeadm.go:310] 
	I0414 14:29:26.385469 1213155 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0414 14:29:26.385475 1213155 kubeadm.go:310] 
	I0414 14:29:26.385551 1213155 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0414 14:29:26.385557 1213155 kubeadm.go:310] 
	I0414 14:29:26.385578 1213155 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0414 14:29:26.385628 1213155 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0414 14:29:26.385686 1213155 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0414 14:29:26.385693 1213155 kubeadm.go:310] 
	I0414 14:29:26.385743 1213155 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0414 14:29:26.385752 1213155 kubeadm.go:310] 
	I0414 14:29:26.385800 1213155 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0414 14:29:26.385806 1213155 kubeadm.go:310] 
	I0414 14:29:26.385852 1213155 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0414 14:29:26.385921 1213155 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0414 14:29:26.385993 1213155 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0414 14:29:26.385999 1213155 kubeadm.go:310] 
	I0414 14:29:26.386068 1213155 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0414 14:29:26.386137 1213155 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0414 14:29:26.386143 1213155 kubeadm.go:310] 
	I0414 14:29:26.386219 1213155 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token vqb1fe.jxjhh2el8g0wstxf \
	I0414 14:29:26.386324 1213155 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c1bc537cee1b1ab5982921331b936a1839b1da6b0963279993bdeae11071854b \
	I0414 14:29:26.386357 1213155 kubeadm.go:310] 	--control-plane 
	I0414 14:29:26.386367 1213155 kubeadm.go:310] 
	I0414 14:29:26.386468 1213155 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0414 14:29:26.386481 1213155 kubeadm.go:310] 
	I0414 14:29:26.386583 1213155 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vqb1fe.jxjhh2el8g0wstxf \
	I0414 14:29:26.386727 1213155 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c1bc537cee1b1ab5982921331b936a1839b1da6b0963279993bdeae11071854b 
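
	For reference, the --discovery-token-ca-cert-hash printed above is not a digest of the certificate file itself: kubeadm hashes the DER-encoded Subject Public Key Info of the cluster CA. A minimal Go sketch that reproduces such a value (the ca.crt path is taken from the certs directory logged earlier and is an assumption, not part of this run):

	    // cahash.go: compute a kubeadm-style discovery-token-ca-cert-hash.
	    package main

	    import (
	        "crypto/sha256"
	        "crypto/x509"
	        "encoding/pem"
	        "fmt"
	        "log"
	        "os"
	    )

	    func main() {
	        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // assumed path
	        if err != nil {
	            log.Fatal(err)
	        }
	        block, _ := pem.Decode(pemBytes)
	        if block == nil || block.Type != "CERTIFICATE" {
	            log.Fatal("no CERTIFICATE block found")
	        }
	        cert, err := x509.ParseCertificate(block.Bytes)
	        if err != nil {
	            log.Fatal(err)
	        }
	        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo, not the whole cert.
	        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	        fmt.Printf("sha256:%x\n", sum)
	    }
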
	I0414 14:29:26.386755 1213155 cni.go:84] Creating CNI manager for ""
	I0414 14:29:26.386764 1213155 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0414 14:29:26.388208 1213155 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0414 14:29:26.389242 1213155 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0414 14:29:26.394753 1213155 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0414 14:29:26.394774 1213155 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0414 14:29:26.412210 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
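
	The two Runs above show the stage-then-apply pattern: the rendered CNI manifest is copied to /var/tmp/minikube/cni.yaml on the node, then applied with the node-local kubectl against /var/lib/minikube/kubeconfig. A stripped-down local sketch of the same step (the placeholder manifest bytes are illustrative):

	    // applymanifest.go: stage a rendered manifest on disk, then apply it
	    // with kubectl, mirroring the scp + apply pair in the log above.
	    package main

	    import (
	        "log"
	        "os"
	        "os/exec"
	        "path/filepath"
	    )

	    func main() {
	        manifest := []byte("# rendered CNI manifest would go here\n") // placeholder
	        path := "/var/tmp/minikube/cni.yaml"                          // path from the log
	        if err := os.MkdirAll(filepath.Dir(path), 0o755); err != nil {
	            log.Fatal(err)
	        }
	        if err := os.WriteFile(path, manifest, 0o644); err != nil {
	            log.Fatal(err)
	        }
	        cmd := exec.Command("kubectl", "apply",
	            "--kubeconfig=/var/lib/minikube/kubeconfig", "-f", path)
	        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	        if err := cmd.Run(); err != nil {
	            log.Fatal(err)
	        }
	    }
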
	I0414 14:29:26.820060 1213155 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 14:29:26.820136 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:26.820188 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-290859 minikube.k8s.io/updated_at=2025_04_14T14_29_26_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=ed8f1f01b35eff2786f40199152a1775806f2de2 minikube.k8s.io/name=ha-290859 minikube.k8s.io/primary=true
	I0414 14:29:27.135153 1213155 ops.go:34] apiserver oom_adj: -16
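
	The -16 read back via /proc/$(pgrep kube-apiserver)/oom_adj means the kernel's OOM killer is strongly biased away from choosing the apiserver as a victim. A tiny sketch of the same check (the PID argument stands in for the pgrep output):

	    // oomcheck.go: read the oom_adj of a process by PID, as the log line does.
	    package main

	    import (
	        "fmt"
	        "log"
	        "os"
	        "strings"
	    )

	    func main() {
	        if len(os.Args) != 2 {
	            log.Fatal("usage: oomcheck <pid>")
	        }
	        data, err := os.ReadFile("/proc/" + os.Args[1] + "/oom_adj")
	        if err != nil {
	            log.Fatal(err)
	        }
	        fmt.Println("oom_adj:", strings.TrimSpace(string(data)))
	    }
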
	I0414 14:29:27.135367 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:27.635449 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:28.135449 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:28.636235 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:29.136309 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:29.636026 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:29.742992 1213155 kubeadm.go:1113] duration metric: took 2.922923817s to wait for elevateKubeSystemPrivileges
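
	The repeated `kubectl get sa default` Runs above form a poll loop: elevateKubeSystemPrivileges waits, at roughly 500ms intervals (inferred from the timestamps), until the default service account exists before proceeding. A minimal sketch of that wait (the 2-minute deadline is an assumption):

	    // waitsa.go: poll until a command succeeds, mirroring the 500ms
	    // `kubectl get sa default` loop recorded above.
	    package main

	    import (
	        "log"
	        "os/exec"
	        "time"
	    )

	    func main() {
	        deadline := time.Now().Add(2 * time.Minute) // assumed timeout
	        for time.Now().Before(deadline) {
	            err := exec.Command("kubectl", "get", "sa", "default",
	                "--kubeconfig=/var/lib/minikube/kubeconfig").Run()
	            if err == nil {
	                log.Println("default service account is ready")
	                return
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        log.Fatal("timed out waiting for default service account")
	    }
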
	I0414 14:29:29.743045 1213155 kubeadm.go:394] duration metric: took 14.619926947s to StartCluster
	I0414 14:29:29.743074 1213155 settings.go:142] acquiring lock: {Name:mk41907a6d0da0bb56b7cd58b5d8065ec36ecc97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:29.743194 1213155 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:29:29.744197 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/kubeconfig: {Name:mkeb969af3beabfdafe344f27031959a97621135 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:29.744491 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0414 14:29:29.744502 1213155 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0414 14:29:29.744531 1213155 start.go:241] waiting for startup goroutines ...
	I0414 14:29:29.744555 1213155 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 14:29:29.744638 1213155 addons.go:69] Setting storage-provisioner=true in profile "ha-290859"
	I0414 14:29:29.744667 1213155 addons.go:238] Setting addon storage-provisioner=true in "ha-290859"
	I0414 14:29:29.744674 1213155 addons.go:69] Setting default-storageclass=true in profile "ha-290859"
	I0414 14:29:29.744699 1213155 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:29:29.744707 1213155 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-290859"
	I0414 14:29:29.744811 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:29.745181 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.745244 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.745183 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.745351 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.761398 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40887
	I0414 14:29:29.761447 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39907
	I0414 14:29:29.761914 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.762048 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.762457 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.762483 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.762590 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.762615 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.762878 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.762995 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.763052 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:29.763589 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.763641 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.765711 1213155 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:29:29.765898 1213155 kapi.go:59] client config for ha-290859: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt", KeyFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key", CAFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0414 14:29:29.766513 1213155 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0414 14:29:29.766536 1213155 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0414 14:29:29.766543 1213155 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0414 14:29:29.766547 1213155 cert_rotation.go:140] Starting client certificate rotation controller
	I0414 14:29:29.766549 1213155 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0414 14:29:29.766958 1213155 addons.go:238] Setting addon default-storageclass=true in "ha-290859"
	I0414 14:29:29.767009 1213155 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:29:29.767411 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.767464 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.779638 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46315
	I0414 14:29:29.780179 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.780847 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.780887 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.781279 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.781512 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:29.783372 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:29.783403 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36833
	I0414 14:29:29.783908 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.784349 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.784370 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.784677 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.785084 1213155 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 14:29:29.785313 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.785366 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.786178 1213155 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 14:29:29.786200 1213155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 14:29:29.786221 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:29.789923 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:29.790430 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:29.790464 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:29.790637 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:29.790795 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:29.790922 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:29.791099 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:29.802732 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37933
	I0414 14:29:29.803356 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.803862 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.803890 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.804276 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.804490 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:29.806170 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:29.806431 1213155 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 14:29:29.806453 1213155 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 14:29:29.806472 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:29.808998 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:29.809401 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:29.809433 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:29.809569 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:29.809729 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:29.809892 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:29.810022 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:29.896163 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0414 14:29:29.925192 1213155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 14:29:29.976032 1213155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 14:29:30.538988 1213155 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
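
	The sed pipeline above splices a hosts plugin block into the CoreDNS Corefile ahead of the forward directive, so in-cluster lookups of host.minikube.internal resolve to the host-side gateway (192.168.39.1). Reconstructed from the sed expressions, the injected fragment is:

	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }
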
	I0414 14:29:30.715801 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.715837 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.715837 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.715853 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.716172 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.716195 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.716206 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.716213 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.716280 1213155 main.go:141] libmachine: (ha-290859) DBG | Closing plugin on server side
	I0414 14:29:30.716311 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.716327 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.716336 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.716346 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.716567 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.716583 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.716597 1213155 main.go:141] libmachine: (ha-290859) DBG | Closing plugin on server side
	I0414 14:29:30.716566 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.716613 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.716759 1213155 round_trippers.go:470] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0414 14:29:30.716773 1213155 round_trippers.go:476] Request Headers:
	I0414 14:29:30.716785 1213155 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:29:30.716791 1213155 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:29:30.730413 1213155 round_trippers.go:581] Response Status: 200 OK in 13 milliseconds
	I0414 14:29:30.730637 1213155 round_trippers.go:470] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0414 14:29:30.730648 1213155 round_trippers.go:476] Request Headers:
	I0414 14:29:30.730655 1213155 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:29:30.730659 1213155 round_trippers.go:480]     Content-Type: application/vnd.kubernetes.protobuf
	I0414 14:29:30.730662 1213155 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:29:30.734349 1213155 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0414 14:29:30.734498 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.734513 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.734892 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.734913 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.734944 1213155 main.go:141] libmachine: (ha-290859) DBG | Closing plugin on server side
	I0414 14:29:30.736606 1213155 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0414 14:29:30.738276 1213155 addons.go:514] duration metric: took 993.723048ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0414 14:29:30.738323 1213155 start.go:246] waiting for cluster config update ...
	I0414 14:29:30.738339 1213155 start.go:255] writing updated cluster config ...
	I0414 14:29:30.739993 1213155 out.go:201] 
	I0414 14:29:30.741235 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:30.741303 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:29:30.742718 1213155 out.go:177] * Starting "ha-290859-m02" control-plane node in "ha-290859" cluster
	I0414 14:29:30.743745 1213155 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:29:30.743770 1213155 cache.go:56] Caching tarball of preloaded images
	I0414 14:29:30.743876 1213155 preload.go:172] Found /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0414 14:29:30.743890 1213155 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0414 14:29:30.743970 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:29:30.744172 1213155 start.go:360] acquireMachinesLock for ha-290859-m02: {Name:mk496006d22a0565bb9e0d565e1b3cb0cf0971cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 14:29:30.744229 1213155 start.go:364] duration metric: took 28.185µs to acquireMachinesLock for "ha-290859-m02"
	I0414 14:29:30.744249 1213155 start.go:93] Provisioning new machine with config: &{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0414 14:29:30.744334 1213155 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0414 14:29:30.745838 1213155 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0414 14:29:30.745923 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:30.745962 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:30.761449 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46555
	I0414 14:29:30.761938 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:30.762474 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:30.762500 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:30.762925 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:30.763197 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:29:30.763401 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:30.763637 1213155 start.go:159] libmachine.API.Create for "ha-290859" (driver="kvm2")
	I0414 14:29:30.763675 1213155 client.go:168] LocalClient.Create starting
	I0414 14:29:30.763717 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem
	I0414 14:29:30.763761 1213155 main.go:141] libmachine: Decoding PEM data...
	I0414 14:29:30.763783 1213155 main.go:141] libmachine: Parsing certificate...
	I0414 14:29:30.763861 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem
	I0414 14:29:30.763890 1213155 main.go:141] libmachine: Decoding PEM data...
	I0414 14:29:30.763907 1213155 main.go:141] libmachine: Parsing certificate...
	I0414 14:29:30.763954 1213155 main.go:141] libmachine: Running pre-create checks...
	I0414 14:29:30.763968 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .PreCreateCheck
	I0414 14:29:30.764183 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
	I0414 14:29:30.764607 1213155 main.go:141] libmachine: Creating machine...
	I0414 14:29:30.764633 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .Create
	I0414 14:29:30.764796 1213155 main.go:141] libmachine: (ha-290859-m02) creating KVM machine...
	I0414 14:29:30.764820 1213155 main.go:141] libmachine: (ha-290859-m02) creating network...
	I0414 14:29:30.765949 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found existing default KVM network
	I0414 14:29:30.766029 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found existing private KVM network mk-ha-290859
	I0414 14:29:30.766196 1213155 main.go:141] libmachine: (ha-290859-m02) setting up store path in /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02 ...
	I0414 14:29:30.766222 1213155 main.go:141] libmachine: (ha-290859-m02) building disk image from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 14:29:30.766301 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:30.766189 1213531 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:29:30.766373 1213155 main.go:141] libmachine: (ha-290859-m02) Downloading /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 14:29:31.062543 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:31.062391 1213531 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa...
	I0414 14:29:31.719024 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:31.718890 1213531 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/ha-290859-m02.rawdisk...
	I0414 14:29:31.719061 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Writing magic tar header
	I0414 14:29:31.719076 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Writing SSH key tar header
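
	The "magic tar header" lines record how the kvm2 driver seeds the raw disk: the image begins with a small tar stream carrying the generated SSH key, which the guest unpacks on first boot, and the file is then grown to the full disk size. A hedged Go sketch of that layout (file names, the key path, and the 20000MB size mirror this run, but the code is illustrative, not the driver's):

	    // rawdisk.go: sketch of a boot2docker-style raw disk whose leading bytes
	    // are a tar stream carrying the SSH key. Names and sizes are illustrative.
	    package main

	    import (
	        "archive/tar"
	        "log"
	        "os"
	    )

	    func main() {
	        f, err := os.Create("ha-290859-m02.rawdisk")
	        if err != nil {
	            log.Fatal(err)
	        }
	        defer f.Close()

	        key, err := os.ReadFile("id_rsa.pub") // assumed key path
	        if err != nil {
	            log.Fatal(err)
	        }
	        tw := tar.NewWriter(f)
	        hdr := &tar.Header{
	            Name: ".ssh/authorized_keys", Mode: 0600,
	            Size: int64(len(key)), Typeflag: tar.TypeReg,
	        }
	        if err := tw.WriteHeader(hdr); err != nil {
	            log.Fatal(err)
	        }
	        if _, err := tw.Write(key); err != nil {
	            log.Fatal(err)
	        }
	        if err := tw.Close(); err != nil {
	            log.Fatal(err)
	        }
	        // Grow the file to the full disk size; qemu reads it as a raw disk.
	        if err := f.Truncate(20000 * 1024 * 1024); err != nil {
	            log.Fatal(err)
	        }
	    }
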
	I0414 14:29:31.719086 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:31.719015 1213531 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02 ...
	I0414 14:29:31.719187 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02
	I0414 14:29:31.719213 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02 (perms=drwx------)
	I0414 14:29:31.719221 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines
	I0414 14:29:31.719232 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:29:31.719239 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines (perms=drwxr-xr-x)
	I0414 14:29:31.719270 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368
	I0414 14:29:31.719288 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube (perms=drwxr-xr-x)
	I0414 14:29:31.719298 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 14:29:31.719315 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins
	I0414 14:29:31.719326 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home
	I0414 14:29:31.719336 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | skipping /home - not owner
	I0414 14:29:31.719349 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368 (perms=drwxrwxr-x)
	I0414 14:29:31.719368 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 14:29:31.719380 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 14:29:31.719386 1213155 main.go:141] libmachine: (ha-290859-m02) creating domain...
	I0414 14:29:31.720303 1213155 main.go:141] libmachine: (ha-290859-m02) define libvirt domain using xml: 
	I0414 14:29:31.720321 1213155 main.go:141] libmachine: (ha-290859-m02) <domain type='kvm'>
	I0414 14:29:31.720330 1213155 main.go:141] libmachine: (ha-290859-m02)   <name>ha-290859-m02</name>
	I0414 14:29:31.720338 1213155 main.go:141] libmachine: (ha-290859-m02)   <memory unit='MiB'>2200</memory>
	I0414 14:29:31.720346 1213155 main.go:141] libmachine: (ha-290859-m02)   <vcpu>2</vcpu>
	I0414 14:29:31.720352 1213155 main.go:141] libmachine: (ha-290859-m02)   <features>
	I0414 14:29:31.720359 1213155 main.go:141] libmachine: (ha-290859-m02)     <acpi/>
	I0414 14:29:31.720364 1213155 main.go:141] libmachine: (ha-290859-m02)     <apic/>
	I0414 14:29:31.720371 1213155 main.go:141] libmachine: (ha-290859-m02)     <pae/>
	I0414 14:29:31.720381 1213155 main.go:141] libmachine: (ha-290859-m02)     
	I0414 14:29:31.720411 1213155 main.go:141] libmachine: (ha-290859-m02)   </features>
	I0414 14:29:31.720433 1213155 main.go:141] libmachine: (ha-290859-m02)   <cpu mode='host-passthrough'>
	I0414 14:29:31.720452 1213155 main.go:141] libmachine: (ha-290859-m02)   
	I0414 14:29:31.720461 1213155 main.go:141] libmachine: (ha-290859-m02)   </cpu>
	I0414 14:29:31.720488 1213155 main.go:141] libmachine: (ha-290859-m02)   <os>
	I0414 14:29:31.720507 1213155 main.go:141] libmachine: (ha-290859-m02)     <type>hvm</type>
	I0414 14:29:31.720537 1213155 main.go:141] libmachine: (ha-290859-m02)     <boot dev='cdrom'/>
	I0414 14:29:31.720559 1213155 main.go:141] libmachine: (ha-290859-m02)     <boot dev='hd'/>
	I0414 14:29:31.720572 1213155 main.go:141] libmachine: (ha-290859-m02)     <bootmenu enable='no'/>
	I0414 14:29:31.720587 1213155 main.go:141] libmachine: (ha-290859-m02)   </os>
	I0414 14:29:31.720597 1213155 main.go:141] libmachine: (ha-290859-m02)   <devices>
	I0414 14:29:31.720609 1213155 main.go:141] libmachine: (ha-290859-m02)     <disk type='file' device='cdrom'>
	I0414 14:29:31.720626 1213155 main.go:141] libmachine: (ha-290859-m02)       <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/boot2docker.iso'/>
	I0414 14:29:31.720637 1213155 main.go:141] libmachine: (ha-290859-m02)       <target dev='hdc' bus='scsi'/>
	I0414 14:29:31.720649 1213155 main.go:141] libmachine: (ha-290859-m02)       <readonly/>
	I0414 14:29:31.720659 1213155 main.go:141] libmachine: (ha-290859-m02)     </disk>
	I0414 14:29:31.720668 1213155 main.go:141] libmachine: (ha-290859-m02)     <disk type='file' device='disk'>
	I0414 14:29:31.720684 1213155 main.go:141] libmachine: (ha-290859-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 14:29:31.720699 1213155 main.go:141] libmachine: (ha-290859-m02)       <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/ha-290859-m02.rawdisk'/>
	I0414 14:29:31.720732 1213155 main.go:141] libmachine: (ha-290859-m02)       <target dev='hda' bus='virtio'/>
	I0414 14:29:31.720746 1213155 main.go:141] libmachine: (ha-290859-m02)     </disk>
	I0414 14:29:31.720756 1213155 main.go:141] libmachine: (ha-290859-m02)     <interface type='network'>
	I0414 14:29:31.720768 1213155 main.go:141] libmachine: (ha-290859-m02)       <source network='mk-ha-290859'/>
	I0414 14:29:31.720777 1213155 main.go:141] libmachine: (ha-290859-m02)       <model type='virtio'/>
	I0414 14:29:31.720788 1213155 main.go:141] libmachine: (ha-290859-m02)     </interface>
	I0414 14:29:31.720799 1213155 main.go:141] libmachine: (ha-290859-m02)     <interface type='network'>
	I0414 14:29:31.720809 1213155 main.go:141] libmachine: (ha-290859-m02)       <source network='default'/>
	I0414 14:29:31.720821 1213155 main.go:141] libmachine: (ha-290859-m02)       <model type='virtio'/>
	I0414 14:29:31.720835 1213155 main.go:141] libmachine: (ha-290859-m02)     </interface>
	I0414 14:29:31.720844 1213155 main.go:141] libmachine: (ha-290859-m02)     <serial type='pty'>
	I0414 14:29:31.720855 1213155 main.go:141] libmachine: (ha-290859-m02)       <target port='0'/>
	I0414 14:29:31.720865 1213155 main.go:141] libmachine: (ha-290859-m02)     </serial>
	I0414 14:29:31.720875 1213155 main.go:141] libmachine: (ha-290859-m02)     <console type='pty'>
	I0414 14:29:31.720886 1213155 main.go:141] libmachine: (ha-290859-m02)       <target type='serial' port='0'/>
	I0414 14:29:31.720896 1213155 main.go:141] libmachine: (ha-290859-m02)     </console>
	I0414 14:29:31.720909 1213155 main.go:141] libmachine: (ha-290859-m02)     <rng model='virtio'>
	I0414 14:29:31.720943 1213155 main.go:141] libmachine: (ha-290859-m02)       <backend model='random'>/dev/random</backend>
	I0414 14:29:31.720956 1213155 main.go:141] libmachine: (ha-290859-m02)     </rng>
	I0414 14:29:31.720962 1213155 main.go:141] libmachine: (ha-290859-m02)     
	I0414 14:29:31.720972 1213155 main.go:141] libmachine: (ha-290859-m02)     
	I0414 14:29:31.720978 1213155 main.go:141] libmachine: (ha-290859-m02)   </devices>
	I0414 14:29:31.720993 1213155 main.go:141] libmachine: (ha-290859-m02) </domain>
	I0414 14:29:31.721002 1213155 main.go:141] libmachine: (ha-290859-m02) 
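
	The XML block above is the complete libvirt domain definition the driver submits: a 2200MiB, 2-vCPU KVM guest that boots the boot2docker ISO from a SCSI CD-ROM, attaches the rawdisk over virtio, and joins both the mk-ha-290859 and default networks. The CLI equivalent of defining and starting it, assuming the XML were saved to domain.xml, is sketched below (virsh against the qemu:///system URI from the machine config):

	    // definedomain.go: define and start a libvirt domain from an XML file,
	    // the CLI analogue of what the kvm2 driver does through the libvirt API.
	    package main

	    import (
	        "log"
	        "os"
	        "os/exec"
	    )

	    func run(name string, args ...string) {
	        cmd := exec.Command(name, args...)
	        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	        if err := cmd.Run(); err != nil {
	            log.Fatalf("%s %v: %v", name, args, err)
	        }
	    }

	    func main() {
	        // domain.xml would hold a <domain type='kvm'> document like the one above.
	        run("virsh", "--connect", "qemu:///system", "define", "domain.xml")
	        run("virsh", "--connect", "qemu:///system", "start", "ha-290859-m02")
	    }
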
	I0414 14:29:31.727524 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:76:01:7d in network default
	I0414 14:29:31.728172 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:31.728187 1213155 main.go:141] libmachine: (ha-290859-m02) starting domain...
	I0414 14:29:31.728195 1213155 main.go:141] libmachine: (ha-290859-m02) ensuring networks are active...
	I0414 14:29:31.728896 1213155 main.go:141] libmachine: (ha-290859-m02) Ensuring network default is active
	I0414 14:29:31.729170 1213155 main.go:141] libmachine: (ha-290859-m02) Ensuring network mk-ha-290859 is active
	I0414 14:29:31.729521 1213155 main.go:141] libmachine: (ha-290859-m02) getting domain XML...
	I0414 14:29:31.730489 1213155 main.go:141] libmachine: (ha-290859-m02) creating domain...
	I0414 14:29:32.993969 1213155 main.go:141] libmachine: (ha-290859-m02) waiting for IP...
	I0414 14:29:32.996009 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:32.996441 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:32.996505 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:32.996448 1213531 retry.go:31] will retry after 202.522594ms: waiting for domain to come up
	I0414 14:29:33.201175 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:33.201705 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:33.201751 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:33.201682 1213531 retry.go:31] will retry after 346.96007ms: waiting for domain to come up
	I0414 14:29:33.550485 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:33.550900 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:33.550931 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:33.550863 1213531 retry.go:31] will retry after 407.207189ms: waiting for domain to come up
	I0414 14:29:33.959550 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:33.960116 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:33.960149 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:33.960094 1213531 retry.go:31] will retry after 434.401549ms: waiting for domain to come up
	I0414 14:29:34.395749 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:34.396217 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:34.396267 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:34.396208 1213531 retry.go:31] will retry after 552.547121ms: waiting for domain to come up
	I0414 14:29:34.949860 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:34.950310 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:34.950344 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:34.950269 1213531 retry.go:31] will retry after 848.939274ms: waiting for domain to come up
	I0414 14:29:35.800706 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:35.801275 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:35.801301 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:35.801229 1213531 retry.go:31] will retry after 1.078619357s: waiting for domain to come up
	I0414 14:29:36.881700 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:36.882163 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:36.882187 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:36.882128 1213531 retry.go:31] will retry after 1.079210669s: waiting for domain to come up
	I0414 14:29:37.963455 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:37.963935 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:37.963969 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:37.963899 1213531 retry.go:31] will retry after 1.194058186s: waiting for domain to come up
	I0414 14:29:39.160481 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:39.160993 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:39.161031 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:39.160949 1213531 retry.go:31] will retry after 1.513626688s: waiting for domain to come up
	I0414 14:29:40.676551 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:40.677038 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:40.677071 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:40.677004 1213531 retry.go:31] will retry after 1.924347004s: waiting for domain to come up
	I0414 14:29:42.603644 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:42.604168 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:42.604192 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:42.604145 1213531 retry.go:31] will retry after 2.797639018s: waiting for domain to come up
	I0414 14:29:45.405004 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:45.405658 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:45.405688 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:45.405627 1213531 retry.go:31] will retry after 2.864814671s: waiting for domain to come up
	I0414 14:29:48.274060 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:48.274518 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:48.274591 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:48.274508 1213531 retry.go:31] will retry after 4.611052523s: waiting for domain to come up
	I0414 14:29:52.886693 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:52.887068 1213155 main.go:141] libmachine: (ha-290859-m02) found domain IP: 192.168.39.111
	I0414 14:29:52.887093 1213155 main.go:141] libmachine: (ha-290859-m02) reserving static IP address...
	I0414 14:29:52.887105 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has current primary IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:52.887506 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find host DHCP lease matching {name: "ha-290859-m02", mac: "52:54:00:f0:fd:94", ip: "192.168.39.111"} in network mk-ha-290859
	I0414 14:29:52.966052 1213155 main.go:141] libmachine: (ha-290859-m02) reserved static IP address 192.168.39.111 for domain ha-290859-m02
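
	The "will retry after ..." lines above are libmachine's randomized backoff while waiting for the domain's DHCP lease to appear: the delays grow roughly geometrically, with jitter, from ~200ms up to several seconds. A compact sketch of the same retry shape (lookupIP is a stand-in for querying the network's leases):

	    // waitip.go: retry with randomized, growing delays, mirroring the
	    // "will retry after ..." sequence recorded above.
	    package main

	    import (
	        "errors"
	        "log"
	        "math/rand"
	        "time"
	    )

	    // lookupIP stands in for querying the DHCP leases; it always fails here.
	    func lookupIP() (string, error) { return "", errors.New("no lease yet") }

	    func main() {
	        delay := 200 * time.Millisecond
	        for attempt := 1; attempt <= 15; attempt++ {
	            ip, err := lookupIP()
	            if err == nil {
	                log.Println("found domain IP:", ip)
	                return
	            }
	            // Add up to 100% jitter, then grow the base delay.
	            jittered := delay + time.Duration(rand.Int63n(int64(delay)))
	            log.Printf("attempt %d: will retry after %v", attempt, jittered)
	            time.Sleep(jittered)
	            delay *= 2
	        }
	        log.Fatal("timed out waiting for domain to come up")
	    }
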
	I0414 14:29:52.966083 1213155 main.go:141] libmachine: (ha-290859-m02) waiting for SSH...
	I0414 14:29:52.966091 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Getting to WaitForSSH function...
	I0414 14:29:52.968665 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:52.969034 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:52.969082 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:52.969208 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Using SSH client type: external
	I0414 14:29:52.969231 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa (-rw-------)
	I0414 14:29:52.969263 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:29:52.969282 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | About to run SSH command:
	I0414 14:29:52.969295 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | exit 0
	I0414 14:29:53.095336 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | SSH cmd err, output: <nil>: 
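
	The WaitForSSH probe shown above shells out to the external ssh client with non-interactive options and runs `exit 0` until the command succeeds; the empty "SSH cmd err, output" line records the first success. A sketch of that readiness probe using a subset of the flags and the target logged above (the 3-second poll interval is an assumption):

	    // sshprobe.go: probe SSH readiness by running `exit 0` remotely.
	    package main

	    import (
	        "log"
	        "os/exec"
	        "time"
	    )

	    func main() {
	        args := []string{
	            "-o", "ConnectTimeout=10", "-o", "StrictHostKeyChecking=no",
	            "-o", "UserKnownHostsFile=/dev/null", "-o", "PasswordAuthentication=no",
	            "-i", "/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa",
	            "docker@192.168.39.111", "exit 0",
	        }
	        for {
	            if err := exec.Command("ssh", args...).Run(); err == nil {
	                log.Println("SSH is available")
	                return
	            }
	            time.Sleep(3 * time.Second) // assumed poll interval
	        }
	    }
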
	I0414 14:29:53.095545 1213155 main.go:141] libmachine: (ha-290859-m02) KVM machine creation complete
	I0414 14:29:53.095910 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
	I0414 14:29:53.096462 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:53.096622 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:53.096806 1213155 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 14:29:53.096820 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetState
	I0414 14:29:53.098070 1213155 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 14:29:53.098085 1213155 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 14:29:53.098090 1213155 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 14:29:53.098095 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.100244 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.100649 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.100680 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.100852 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.101066 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.101236 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.101372 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.101519 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:53.101769 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:53.101782 1213155 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 14:29:53.206593 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:29:53.206617 1213155 main.go:141] libmachine: Detecting the provisioner...
	I0414 14:29:53.206628 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.209588 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.209969 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.209988 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.210187 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.210382 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.210544 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.210717 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.210971 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:53.211192 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:53.211205 1213155 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 14:29:53.315888 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 14:29:53.315980 1213155 main.go:141] libmachine: found compatible host: buildroot
	I0414 14:29:53.315990 1213155 main.go:141] libmachine: Provisioning with buildroot...
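
Provisioner detection boils down to fetching /etc/os-release over SSH and matching its ID field; here ID=buildroot selects the buildroot provisioner. An illustrative stdlib parser for that KEY=VALUE file format:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // parseOSRelease reads KEY=VALUE pairs, stripping optional quotes.
    func parseOSRelease(path string) (map[string]string, error) {
    	f, err := os.Open(path)
    	if err != nil {
    		return nil, err
    	}
    	defer f.Close()
    	kv := map[string]string{}
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		if line == "" || strings.HasPrefix(line, "#") {
    			continue
    		}
    		k, v, ok := strings.Cut(line, "=")
    		if !ok {
    			continue
    		}
    		kv[k] = strings.Trim(v, `"`)
    	}
    	return kv, sc.Err()
    }

    func main() {
    	kv, err := parseOSRelease("/etc/os-release")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	// The detector in the log matched ID=buildroot and picked the buildroot provisioner.
    	fmt.Println("ID:", kv["ID"], "PRETTY_NAME:", kv["PRETTY_NAME"])
    }
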
	I0414 14:29:53.316001 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:29:53.316277 1213155 buildroot.go:166] provisioning hostname "ha-290859-m02"
	I0414 14:29:53.316306 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:29:53.316451 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.319393 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.319803 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.319837 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.319946 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.320140 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.320321 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.320450 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.320602 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:53.320806 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:53.320818 1213155 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-290859-m02 && echo "ha-290859-m02" | sudo tee /etc/hostname
	I0414 14:29:53.442594 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-290859-m02
	
	I0414 14:29:53.442629 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.445561 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.445918 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.445944 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.446150 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.446351 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.446528 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.446678 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.446833 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:53.447038 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:53.447053 1213155 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-290859-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-290859-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-290859-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:29:53.559946 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
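
The shell above keeps /etc/hosts consistent with the freshly set hostname: rewrite the 127.0.1.1 line if one exists, append it otherwise. The same logic as an illustrative Go function (path and hostname hard-coded for the sketch):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // setLoopbackHostname rewrites the 127.0.1.1 line or appends one,
    // mirroring the sed/tee logic in the provisioning shell above.
    func setLoopbackHostname(path, hostname string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	lines := strings.Split(string(data), "\n")
    	replaced := false
    	for i, l := range lines {
    		if strings.HasPrefix(l, "127.0.1.1") {
    			lines[i] = "127.0.1.1 " + hostname
    			replaced = true
    		}
    	}
    	if !replaced {
    		lines = append(lines, "127.0.1.1 "+hostname)
    	}
    	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
    }

    func main() {
    	if err := setLoopbackHostname("/etc/hosts", "ha-290859-m02"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
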
	I0414 14:29:53.559988 1213155 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1196368/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1196368/.minikube}
	I0414 14:29:53.560014 1213155 buildroot.go:174] setting up certificates
	I0414 14:29:53.560031 1213155 provision.go:84] configureAuth start
	I0414 14:29:53.560046 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:29:53.560377 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:29:53.562822 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.563207 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.563237 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.563574 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.566107 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.566478 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.566505 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.566628 1213155 provision.go:143] copyHostCerts
	I0414 14:29:53.566676 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:29:53.566716 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem, removing ...
	I0414 14:29:53.566730 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:29:53.566839 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem (1082 bytes)
	I0414 14:29:53.566954 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:29:53.566979 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem, removing ...
	I0414 14:29:53.566987 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:29:53.567026 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem (1123 bytes)
	I0414 14:29:53.567106 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:29:53.567130 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem, removing ...
	I0414 14:29:53.567137 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:29:53.567173 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem (1675 bytes)
	I0414 14:29:53.567293 1213155 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem org=jenkins.ha-290859-m02 san=[127.0.0.1 192.168.39.111 ha-290859-m02 localhost minikube]
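
The server cert generated above is what makes the node's TLS endpoints verifiable from anywhere: its SANs cover loopback, the node IP, the machine name, and "minikube". A condensed sketch of the SAN mechanics with crypto/x509 (self-signed here for brevity; the real flow signs with the shared CA from certs/ca-key.pem):

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		log.Fatal(err)
    	}
    	tmpl := x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-290859-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs matching the san=[...] list in the log line above.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.111")},
    		DNSNames:    []string{"ha-290859-m02", "localhost", "minikube"},
    	}
    	// Self-signed for the sketch; production code passes the CA cert and key instead.
    	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
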
	I0414 14:29:53.976110 1213155 provision.go:177] copyRemoteCerts
	I0414 14:29:53.976184 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:29:53.976219 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.978798 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.979170 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.979202 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.979355 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.979571 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.979771 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.979950 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:29:54.060926 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0414 14:29:54.061020 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0414 14:29:54.083723 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0414 14:29:54.083818 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 14:29:54.106702 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0414 14:29:54.106773 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 14:29:54.128136 1213155 provision.go:87] duration metric: took 568.088664ms to configureAuth
	I0414 14:29:54.128177 1213155 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:29:54.128372 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:54.128400 1213155 main.go:141] libmachine: Checking connection to Docker...
	I0414 14:29:54.128413 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetURL
	I0414 14:29:54.129571 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | using libvirt version 6000000
	I0414 14:29:54.131690 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.132071 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.132095 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.132296 1213155 main.go:141] libmachine: Docker is up and running!
	I0414 14:29:54.132311 1213155 main.go:141] libmachine: Reticulating splines...
	I0414 14:29:54.132318 1213155 client.go:171] duration metric: took 23.368636066s to LocalClient.Create
	I0414 14:29:54.132344 1213155 start.go:167] duration metric: took 23.368708618s to libmachine.API.Create "ha-290859"
	I0414 14:29:54.132356 1213155 start.go:293] postStartSetup for "ha-290859-m02" (driver="kvm2")
	I0414 14:29:54.132370 1213155 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:29:54.132394 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.132652 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:29:54.132681 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:54.134726 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.135119 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.135146 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.135312 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:54.135512 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.135648 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:54.135782 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:29:54.217134 1213155 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:29:54.221237 1213155 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:29:54.221265 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/addons for local assets ...
	I0414 14:29:54.221324 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/files for local assets ...
	I0414 14:29:54.221392 1213155 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> 12036392.pem in /etc/ssl/certs
	I0414 14:29:54.221401 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /etc/ssl/certs/12036392.pem
	I0414 14:29:54.221495 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:29:54.230111 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:29:54.253934 1213155 start.go:296] duration metric: took 121.560617ms for postStartSetup
	I0414 14:29:54.253995 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
	I0414 14:29:54.254683 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:29:54.257374 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.257778 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.257811 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.258118 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:29:54.258332 1213155 start.go:128] duration metric: took 23.513984018s to createHost
	I0414 14:29:54.258362 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:54.260873 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.261257 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.261285 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.261448 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:54.261638 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.261821 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.261984 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:54.262185 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:54.262369 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:54.262379 1213155 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:29:54.367727 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744640994.343893226
	
	I0414 14:29:54.367759 1213155 fix.go:216] guest clock: 1744640994.343893226
	I0414 14:29:54.367766 1213155 fix.go:229] Guest: 2025-04-14 14:29:54.343893226 +0000 UTC Remote: 2025-04-14 14:29:54.258346943 +0000 UTC m=+69.442509295 (delta=85.546283ms)
	I0414 14:29:54.367782 1213155 fix.go:200] guest clock delta is within tolerance: 85.546283ms
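
The guest-clock check runs `date +%s.%N` on the VM and compares the result with the host clock; the 85.5 ms delta above is accepted, otherwise minikube would resync the guest. A sketch of parsing that output format and applying a tolerance (the one-second tolerance below is an assumption for illustration, not read from minikube):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseEpoch parses "seconds.nanoseconds" as printed by `date +%s.%N`.
    func parseEpoch(s string) (time.Time, error) {
    	secStr, nsecStr, _ := strings.Cut(strings.TrimSpace(s), ".")
    	sec, err := strconv.ParseInt(secStr, 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if nsecStr != "" {
    		if nsec, err = strconv.ParseInt(nsecStr, 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseEpoch("1744640994.343893226") // value from the log
    	if err != nil {
    		panic(err)
    	}
    	host := guest.Add(-85546283 * time.Nanosecond) // stand-in for time.Now() on the host
    	delta := guest.Sub(host)
    	const tolerance = time.Second
    	fmt.Printf("delta=%v, within tolerance: %v\n", delta, delta < tolerance && delta > -tolerance)
    }
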
	I0414 14:29:54.367788 1213155 start.go:83] releasing machines lock for "ha-290859-m02", held for 23.623550564s
	I0414 14:29:54.367807 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.368115 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:29:54.370975 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.371432 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.371462 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.373758 1213155 out.go:177] * Found network options:
	I0414 14:29:54.375127 1213155 out.go:177]   - NO_PROXY=192.168.39.110
	W0414 14:29:54.376278 1213155 proxy.go:119] fail to check proxy env: Error ip not in block
	I0414 14:29:54.376312 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.376913 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.377127 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.377268 1213155 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:29:54.377316 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	W0414 14:29:54.377370 1213155 proxy.go:119] fail to check proxy env: Error ip not in block
	I0414 14:29:54.377457 1213155 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0414 14:29:54.377481 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:54.380102 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.380374 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.380406 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.380429 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.380578 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:54.380741 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.380859 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.380897 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.380909 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:54.381045 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:29:54.381125 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:54.381305 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.381467 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:54.381614 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	W0414 14:29:54.458225 1213155 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:29:54.458308 1213155 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:29:54.490449 1213155 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
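
Since a dedicated CNI (kindnet) will be installed, any pre-baked bridge/podman CNI configs are sidelined by renaming them with a .mk_disabled suffix, which is what the find/mv one-liner above did to 87-podman-bridge.conflist; that way the runtime sees exactly one network config. Equivalent logic as a sketch:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	disabled := []string{}
    	for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
    		matches, err := filepath.Glob(pattern)
    		if err != nil {
    			continue // only possible with a malformed pattern, not these literals
    		}
    		for _, m := range matches {
    			if filepath.Ext(m) == ".mk_disabled" {
    				continue // already sidelined
    			}
    			if err := os.Rename(m, m+".mk_disabled"); err != nil {
    				fmt.Fprintln(os.Stderr, err)
    				continue
    			}
    			disabled = append(disabled, m)
    		}
    	}
    	fmt.Println("disabled:", disabled)
    }
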
	I0414 14:29:54.490475 1213155 start.go:495] detecting cgroup driver to use...
	I0414 14:29:54.490555 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0414 14:29:54.524660 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0414 14:29:54.537871 1213155 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:29:54.537936 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:29:54.549801 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:29:54.562203 1213155 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:29:54.666348 1213155 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:29:54.786710 1213155 docker.go:233] disabling docker service ...
	I0414 14:29:54.786789 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:29:54.800092 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:29:54.812105 1213155 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:29:54.936777 1213155 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:29:55.059002 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 14:29:55.072980 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:29:55.089970 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0414 14:29:55.099362 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0414 14:29:55.108681 1213155 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0414 14:29:55.108766 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0414 14:29:55.118203 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:29:55.127402 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0414 14:29:55.136483 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:29:55.145554 1213155 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:29:55.154769 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0414 14:29:55.163700 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0414 14:29:55.172612 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
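
The sed one-liners above rewrite /etc/containerd/config.toml in place: pin the sandbox (pause) image, force SystemdCgroup = false to match the cgroupfs driver chosen above, migrate runtime names to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. Anchored, indentation-preserving edits keep the rest of the file untouched. Here is the SystemdCgroup edit redone as an illustrative Go regexp, with the capture group playing the same role as in the sed expression:

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/containerd/config.toml"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	// Preserve indentation (group 1) and replace only the value, equivalent to:
    	// sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
    	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
    	if err := os.WriteFile(path, out, 0644); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
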
	I0414 14:29:55.181597 1213155 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:29:55.189962 1213155 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:29:55.190019 1213155 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:29:55.202112 1213155 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
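
The status-255 sysctl above is expected on first boot: /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded, so the module is probed and ip_forward is switched on for pod routing. The same check-then-load order as a sketch (requires root, like the commands above):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
    	if _, err := os.Stat(key); err != nil {
    		// Key is missing until the module loads -- mirrors the 255 exit above.
    		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
    			fmt.Fprintf(os.Stderr, "modprobe: %v: %s\n", err, out)
    			os.Exit(1)
    		}
    	}
    	// echo 1 > /proc/sys/net/ipv4/ip_forward, without the shell.
    	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("netfilter bridge + ip_forward configured")
    }
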
	I0414 14:29:55.210883 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:29:55.319480 1213155 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0414 14:29:55.344914 1213155 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0414 14:29:55.345008 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:29:55.349081 1213155 retry.go:31] will retry after 1.00520308s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0414 14:29:56.354657 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
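
containerd's socket takes a moment to reappear after the restart, hence the roughly one-second retry before the second stat succeeds. A generic poll of that shape:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForPath polls until the path exists or the budget is spent,
    // like the "Will wait 60s for socket path" loop in the log.
    func waitForPath(path string, budget, step time.Duration) error {
    	deadline := time.Now().Add(budget)
    	for {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("%s did not appear within %s", path, budget)
    		}
    		time.Sleep(step)
    	}
    }

    func main() {
    	if err := waitForPath("/run/containerd/containerd.sock", 60*time.Second, time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("socket ready")
    }
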
	I0414 14:29:56.359600 1213155 start.go:563] Will wait 60s for crictl version
	I0414 14:29:56.359685 1213155 ssh_runner.go:195] Run: which crictl
	I0414 14:29:56.363336 1213155 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:29:56.403201 1213155 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0414 14:29:56.403312 1213155 ssh_runner.go:195] Run: containerd --version
	I0414 14:29:56.430179 1213155 ssh_runner.go:195] Run: containerd --version
	I0414 14:29:56.454598 1213155 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.23 ...
	I0414 14:29:56.455785 1213155 out.go:177]   - env NO_PROXY=192.168.39.110
	I0414 14:29:56.456735 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:29:56.459280 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:56.459661 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:56.459691 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:56.459901 1213155 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 14:29:56.463673 1213155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:29:56.475057 1213155 mustload.go:65] Loading cluster: ha-290859
	I0414 14:29:56.475248 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:56.475557 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:56.475600 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:56.490597 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45247
	I0414 14:29:56.491136 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:56.491690 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:56.491711 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:56.492119 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:56.492309 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:56.493794 1213155 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:29:56.494134 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:56.494173 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:56.509360 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38381
	I0414 14:29:56.509774 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:56.510229 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:56.510256 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:56.510618 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:56.510840 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
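
Each "Launching plugin server ... Plugin server listening at 127.0.0.1:PORT" pair above reflects libmachine's driver-plugin architecture: the kvm2 driver runs as a separate binary serving RPC on a loopback port, and calls such as .GetState or .GetSSHHostname in this log are remote calls into it. A toy version of the pattern with net/rpc (names are illustrative, not libmachine's actual wire protocol):

    package main

    import (
    	"fmt"
    	"log"
    	"net"
    	"net/rpc"
    )

    // Driver stands in for a machine driver exposed over RPC by a plugin binary.
    type Driver struct{}

    // GetState mirrors the .GetState calls seen throughout the log.
    func (d *Driver) GetState(_ struct{}, reply *string) error {
    	*reply = "Running"
    	return nil
    }

    func main() {
    	srv := rpc.NewServer()
    	if err := srv.Register(new(Driver)); err != nil {
    		log.Fatal(err)
    	}
    	ln, err := net.Listen("tcp", "127.0.0.1:0") // random loopback port, like 127.0.0.1:45247 above
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("plugin server listening at", ln.Addr())
    	go srv.Accept(ln) // serve connections in the background

    	client, err := rpc.Dial("tcp", ln.Addr().String())
    	if err != nil {
    		log.Fatal(err)
    	}
    	var state string
    	if err := client.Call("Driver.GetState", struct{}{}, &state); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("state:", state)
    }
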
	I0414 14:29:56.511031 1213155 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859 for IP: 192.168.39.111
	I0414 14:29:56.511044 1213155 certs.go:194] generating shared ca certs ...
	I0414 14:29:56.511057 1213155 certs.go:226] acquiring lock for ca certs: {Name:mk7215406b4c41badf9eca6bf9f1036fd88f670e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:56.511177 1213155 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key
	I0414 14:29:56.511226 1213155 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key
	I0414 14:29:56.511236 1213155 certs.go:256] generating profile certs ...
	I0414 14:29:56.511347 1213155 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key
	I0414 14:29:56.511373 1213155 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e
	I0414 14:29:56.511386 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110 192.168.39.111 192.168.39.254]
	I0414 14:29:56.589532 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e ...
	I0414 14:29:56.589564 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e: {Name:mk9fb7b2adad4a62e9ebf1f50826b8647aaaa2d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:56.589727 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e ...
	I0414 14:29:56.589740 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e: {Name:mk7ad07038879568d4a23c2fb5c04f12405eb02f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:56.589811 1213155 certs.go:381] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt
	I0414 14:29:56.589948 1213155 certs.go:385] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key
	I0414 14:29:56.590096 1213155 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key
	I0414 14:29:56.590118 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0414 14:29:56.590137 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0414 14:29:56.590151 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0414 14:29:56.590162 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0414 14:29:56.590180 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0414 14:29:56.590198 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0414 14:29:56.590211 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0414 14:29:56.590220 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0414 14:29:56.590271 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem (1338 bytes)
	W0414 14:29:56.590298 1213155 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639_empty.pem, impossibly tiny 0 bytes
	I0414 14:29:56.590308 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 14:29:56.590327 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem (1082 bytes)
	I0414 14:29:56.590346 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:29:56.590368 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem (1675 bytes)
	I0414 14:29:56.590404 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:29:56.590430 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:56.590446 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem -> /usr/share/ca-certificates/1203639.pem
	I0414 14:29:56.590457 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /usr/share/ca-certificates/12036392.pem
	I0414 14:29:56.590494 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:56.593379 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:56.593755 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:56.593777 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:56.593996 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:56.594232 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:56.594405 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:56.594540 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:56.671687 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0414 14:29:56.677338 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0414 14:29:56.689003 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0414 14:29:56.693487 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0414 14:29:56.704430 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0414 14:29:56.708650 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0414 14:29:56.719039 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0414 14:29:56.723166 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0414 14:29:56.734152 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0414 14:29:56.738243 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0414 14:29:56.749081 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0414 14:29:56.753248 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0414 14:29:56.764073 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:29:56.788198 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:29:56.813073 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:29:56.835958 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:29:56.859645 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0414 14:29:56.882879 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 14:29:56.906187 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:29:56.928932 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 14:29:56.952365 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:29:56.974920 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem --> /usr/share/ca-certificates/1203639.pem (1338 bytes)
	I0414 14:29:56.998466 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /usr/share/ca-certificates/12036392.pem (1708 bytes)
	I0414 14:29:57.022704 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0414 14:29:57.038828 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0414 14:29:57.054237 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0414 14:29:57.069513 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0414 14:29:57.085532 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0414 14:29:57.101522 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0414 14:29:57.117372 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0414 14:29:57.132827 1213155 ssh_runner.go:195] Run: openssl version
	I0414 14:29:57.138331 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:29:57.148324 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:57.152469 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:57.152557 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:57.158279 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 14:29:57.169126 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1203639.pem && ln -fs /usr/share/ca-certificates/1203639.pem /etc/ssl/certs/1203639.pem"
	I0414 14:29:57.179995 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1203639.pem
	I0414 14:29:57.184265 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1203639.pem
	I0414 14:29:57.184340 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1203639.pem
	I0414 14:29:57.189810 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1203639.pem /etc/ssl/certs/51391683.0"
	I0414 14:29:57.199987 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12036392.pem && ln -fs /usr/share/ca-certificates/12036392.pem /etc/ssl/certs/12036392.pem"
	I0414 14:29:57.210177 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12036392.pem
	I0414 14:29:57.214740 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/12036392.pem
	I0414 14:29:57.214815 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12036392.pem
	I0414 14:29:57.221853 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12036392.pem /etc/ssl/certs/3ec20f2e.0"
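
The b5213941.0 and 3ec20f2e.0 link names above are OpenSSL subject-hash names: TLS stacks locate a CA in /etc/ssl/certs by hashing its subject, so each installed PEM gets a <hash>.0 symlink. A sketch reproducing the "openssl x509 -hash -noout" plus "ln -fs" pair (assumes openssl on PATH and write access to /etc/ssl/certs):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem"
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. b5213941 for minikubeCA in this run
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	os.Remove(link) // the -f in ln -fs: replace any stale link
    	if err := os.Symlink(cert, link); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println(link, "->", cert)
    }
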
	I0414 14:29:57.232248 1213155 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:29:57.236270 1213155 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 14:29:57.236327 1213155 kubeadm.go:934] updating node {m02 192.168.39.111 8443 v1.32.2 containerd true true} ...
	I0414 14:29:57.236439 1213155 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-290859-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 14:29:57.236473 1213155 kube-vip.go:115] generating kube-vip config ...
	I0414 14:29:57.236525 1213155 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0414 14:29:57.252239 1213155 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0414 14:29:57.252336 1213155 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.10
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0414 14:29:57.252412 1213155 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 14:29:57.262218 1213155 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.2': No such file or directory
	
	Initiating transfer...
	I0414 14:29:57.262295 1213155 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.2
	I0414 14:29:57.271580 1213155 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
	I0414 14:29:57.271599 1213155 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubeadm
	I0414 14:29:57.271617 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubectl -> /var/lib/minikube/binaries/v1.32.2/kubectl
	I0414 14:29:57.271622 1213155 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubelet
	I0414 14:29:57.271681 1213155 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl
	I0414 14:29:57.275804 1213155 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubectl': No such file or directory
	I0414 14:29:57.275835 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubectl --> /var/lib/minikube/binaries/v1.32.2/kubectl (57323672 bytes)
	I0414 14:29:58.408400 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:29:58.423781 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubelet -> /var/lib/minikube/binaries/v1.32.2/kubelet
	I0414 14:29:58.423898 1213155 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet
	I0414 14:29:58.428378 1213155 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubelet': No such file or directory
	I0414 14:29:58.428415 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubelet --> /var/lib/minikube/binaries/v1.32.2/kubelet (77406468 bytes)
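
The "?checksum=file:...sha256" query string above instructs the downloader (hashicorp/go-getter, judging by the getter struct dumped in the error below) to fetch the published SHA-256 alongside the binary and reject any mismatch. The same contract as a stdlib sketch, using the kubectl URL pair from the log:

    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"strings"
    )

    // fetch downloads url fully into memory (fine for a sketch).
    func fetch(url string) ([]byte, error) {
    	resp, err := http.Get(url)
    	if err != nil {
    		return nil, err
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
    	}
    	return io.ReadAll(resp.Body)
    }

    func main() {
    	const base = "https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl"
    	body, err := fetch(base)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	sums, err := fetch(base + ".sha256") // the published digest named in ?checksum=
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	digest := sha256.Sum256(body)
    	got := hex.EncodeToString(digest[:])
    	want := strings.Fields(string(sums))[0] // tolerate an optional trailing filename
    	if got != want {
    		fmt.Fprintf(os.Stderr, "checksum mismatch: got %s, want %s\n", got, want)
    		os.Exit(1)
    	}
    	if err := os.WriteFile("kubectl", body, 0o755); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("kubectl downloaded and verified")
    }
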
	I0414 14:29:58.749359 1213155 out.go:201] 
	W0414 14:29:58.750775 1213155 out.go:270] X Exiting due to GUEST_START: failed to start node: adding node: update node: downloading binaries: downloading kubeadm: download failed: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256 Dst:/home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubeadm.download Pwd: Mode:2 Umask:---------- Detectors:[0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0] Decompressors:map[bz2:0xc0004c8690 gz:0xc0004c8698 tar:0xc0004c8610 tar.bz2:0xc0004c8620 tar.gz:0xc0004c8630 tar.xz:0xc0004c8650 tar.zst:0xc0004c8660 tbz2:0xc0004c8620 tgz:0xc0004c8630 txz:0xc0004c8650 tzst:0xc0004c8660 xz:0xc0004c8700 zip:0xc0004c8720 zst:0xc0004c8708] Getters:map[file:0xc00216a250 http:0xc00012c550 https:0xc00012c5a0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: read tcp 10.154.0.3:60586->151.101.193.55:443: read: connection reset by peer
	W0414 14:29:58.750801 1213155 out.go:270] * 
	W0414 14:29:58.751639 1213155 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 14:29:58.753070 1213155 out.go:201] 
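
The root cause here is infrastructural, not Kubernetes: the TCP read from 151.101.193.55 (an address in Fastly's range, which appears to front dl.k8s.io) was reset mid-download of kubeadm, and the getter surfaced it as GUEST_START. Connection resets are the textbook transient failure; a retry wrapper of the following shape would likely have rescued the run (illustrative, not minikube code):

    package main

    import (
    	"errors"
    	"fmt"
    	"syscall"
    	"time"
    )

    // retryTransient retries fn on connection-reset style failures with
    // doubling backoff; any other error is returned immediately.
    func retryTransient(attempts int, fn func() error) error {
    	backoff := time.Second
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		if !errors.Is(err, syscall.ECONNRESET) {
    			return err // not transient; give up now
    		}
    		time.Sleep(backoff)
    		backoff *= 2
    	}
    	return fmt.Errorf("still failing after %d attempts: %w", attempts, err)
    }

    func main() {
    	calls := 0
    	err := retryTransient(4, func() error {
    		calls++
    		if calls < 3 {
    			return fmt.Errorf("read tcp: %w", syscall.ECONNRESET) // simulated reset
    		}
    		return nil
    	})
    	fmt.Println("calls:", calls, "err:", err)
    }

Real network errors wrap the underlying errno, so errors.Is(err, syscall.ECONNRESET) also matches errors returned by net/http, not just the simulated one above.
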
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	24e6d7cfe7ea4       8c811b4aec35f       18 minutes ago      Running             busybox                   0                   78438e8022143       busybox-58667487b6-t6bgg
	731a9f2fe8645       c69fa2e9cbf5f       18 minutes ago      Running             coredns                   0                   e56d2e4c87eea       coredns-668d6bf9bc-qnl6q
	0ec0a3a234c7c       c69fa2e9cbf5f       18 minutes ago      Running             coredns                   0                   2818c413e6e32       coredns-668d6bf9bc-wbn4p
	922f97d06563e       6e38f40d628db       18 minutes ago      Running             storage-provisioner       0                   4de376d34ee7f       storage-provisioner
	2df8ccb8d6ed9       df3849d954c98       18 minutes ago      Running             kindnet-cni               0                   08244cfc780bd       kindnet-hm99t
	e22a81661302f       f1332858868e1       18 minutes ago      Running             kube-proxy                0                   f20a0bcfbd507       kube-proxy-cg945
	9914f8879fc43       6ff023a402a69       18 minutes ago      Running             kube-vip                  0                   7b4e857fc4a72       kube-vip-ha-290859
	8263b35014337       b6a454c5a800d       18 minutes ago      Running             kube-controller-manager   0                   96ffccfabb2f0       kube-controller-manager-ha-290859
	3607093f95b04       85b7a174738ba       18 minutes ago      Running             kube-apiserver            0                   7d06c53c8318a       kube-apiserver-ha-290859
	b9d0c94204534       a9e7e6b294baf       18 minutes ago      Running             etcd                      0                   07c98c2ded11c       etcd-ha-290859
	341626ffff967       d8e673e7c9983       18 minutes ago      Running             kube-scheduler            0                   d86edf81d4f34       kube-scheduler-ha-290859
	
	
	==> containerd <==
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.168944603Z" level=info msg="StartContainer for \"0ec0a3a234c7c9ab89ca83a237362a229e9c5f0e94fdbf641b886cf994e1cd2f\""
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.181036869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qnl6q,Uid:a590080d-c4b1-4697-9849-ae6130e483a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"e56d2e4c87eea2d527e5c301e33c596e4ec4533b17e49248e3c35eeb66f90f11\""
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.186359489Z" level=info msg="CreateContainer within sandbox \"e56d2e4c87eea2d527e5c301e33c596e4ec4533b17e49248e3c35eeb66f90f11\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.209760426Z" level=info msg="CreateContainer within sandbox \"e56d2e4c87eea2d527e5c301e33c596e4ec4533b17e49248e3c35eeb66f90f11\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0\""
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.212826022Z" level=info msg="StartContainer for \"922f97d06563e10c12ce83edd45e4f1aa0b78449dcdb50b413a7f4fc80cc346b\" returns successfully"
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.215681811Z" level=info msg="StartContainer for \"731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0\""
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.285830032Z" level=info msg="StartContainer for \"0ec0a3a234c7c9ab89ca83a237362a229e9c5f0e94fdbf641b886cf994e1cd2f\" returns successfully"
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.294639585Z" level=info msg="StartContainer for \"731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0\" returns successfully"
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.131928214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-58667487b6-t6bgg,Uid:bd39f57c-bcb5-4d77-b171-6d4d2f237e54,Namespace:default,Attempt:0,}"
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.218617705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.218691310Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.218706805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.218958691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.281907696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-58667487b6-t6bgg,Uid:bd39f57c-bcb5-4d77-b171-6d4d2f237e54,Namespace:default,Attempt:0,} returns sandbox id \"78438e8022143055bed5e2d8a26db130ead88208a68bd14ca25618be3edf24e2\""
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.284050999Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\""
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.401970091Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.404464641Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28: active requests=0, bytes read=727667"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.406415797Z" level=info msg="ImageCreate event name:\"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.409920833Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.411266903Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28\" with image id \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\", repo tag \"gcr.io/k8s-minikube/busybox:1.28\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\", size \"725911\" in 2.127171694s"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.411378057Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\" returns image reference \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\""
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.414728181Z" level=info msg="CreateContainer within sandbox \"78438e8022143055bed5e2d8a26db130ead88208a68bd14ca25618be3edf24e2\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.437197602Z" level=info msg="CreateContainer within sandbox \"78438e8022143055bed5e2d8a26db130ead88208a68bd14ca25618be3edf24e2\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"24e6d7cfe7ea4490a4e08a40f32b9cf717c4d83060631102c580d6adf2fc47f5\""
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.439640223Z" level=info msg="StartContainer for \"24e6d7cfe7ea4490a4e08a40f32b9cf717c4d83060631102c580d6adf2fc47f5\""
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.489937462Z" level=info msg="StartContainer for \"24e6d7cfe7ea4490a4e08a40f32b9cf717c4d83060631102c580d6adf2fc47f5\" returns successfully"
	
	
	==> coredns [0ec0a3a234c7c9ab89ca83a237362a229e9c5f0e94fdbf641b886cf994e1cd2f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:46089 - 56153 "HINFO IN 6072608555509463616.6529762715821029691. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009374887s
	[INFO] 10.244.0.4:35907 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000221161s
	[INFO] 10.244.0.4:36782 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005796917s
	[INFO] 10.244.0.4:41522 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000189547s
	[INFO] 10.244.0.4:42146 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118814s
	[INFO] 10.244.0.4:60607 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123758s
	[INFO] 10.244.0.4:43711 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000363945s
	[INFO] 10.244.0.4:55165 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147511s
	[INFO] 10.244.0.4:37988 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063814s
	[INFO] 10.244.0.4:34715 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110518s
	
	
	==> coredns [731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:50026 - 40228 "HINFO IN 6089878548460793106.7503956428927620962. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010088983s
	[INFO] 10.244.0.4:56129 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00054069s
	[INFO] 10.244.0.4:53926 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.015577927s
	[INFO] 10.244.0.4:39454 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 1.017801671s
	[INFO] 10.244.0.4:52928 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.006480432s
	[INFO] 10.244.0.4:37155 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000144828s
	[INFO] 10.244.0.4:60063 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003567762s
	[INFO] 10.244.0.4:60207 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153406s
	[INFO] 10.244.0.4:60174 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117303s
	[INFO] 10.244.0.4:60031 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124845s
	[INFO] 10.244.0.4:43114 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000177401s
	[INFO] 10.244.0.4:59108 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000291115s
	
	
	==> describe nodes <==
	Name:               ha-290859
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-290859
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ed8f1f01b35eff2786f40199152a1775806f2de2
	                    minikube.k8s.io/name=ha-290859
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_14T14_29_26_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Apr 2025 14:29:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-290859
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Apr 2025 14:48:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Apr 2025 14:47:25 +0000   Mon, 14 Apr 2025 14:29:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Apr 2025 14:47:25 +0000   Mon, 14 Apr 2025 14:29:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Apr 2025 14:47:25 +0000   Mon, 14 Apr 2025 14:29:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Apr 2025 14:47:25 +0000   Mon, 14 Apr 2025 14:29:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.110
	  Hostname:    ha-290859
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0538f5775f954b3bbf6bc94e8eb6c49a
	  System UUID:                0538f577-5f95-4b3b-bf6b-c94e8eb6c49a
	  Boot ID:                    357ae105-a7f9-47b1-bf31-1c1aadedfe92
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-t6bgg             0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-668d6bf9bc-qnl6q             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                 coredns-668d6bf9bc-wbn4p             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                 etcd-ha-290859                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         18m
	  kube-system                 kindnet-hm99t                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      18m
	  kube-system                 kube-apiserver-ha-290859             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-ha-290859    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-cg945                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-ha-290859             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-vip-ha-290859                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 18m   kube-proxy       
	  Normal  Starting                 18m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  18m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  18m   kubelet          Node ha-290859 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m   kubelet          Node ha-290859 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m   kubelet          Node ha-290859 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18m   node-controller  Node ha-290859 event: Registered Node ha-290859 in Controller
	  Normal  NodeReady                18m   kubelet          Node ha-290859 status is now: NodeReady
	
	
	Name:               ha-290859-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-290859-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ed8f1f01b35eff2786f40199152a1775806f2de2
	                    minikube.k8s.io/name=ha-290859
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_04_14T14_42_30_0700
	                    minikube.k8s.io/version=v1.35.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Apr 2025 14:42:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-290859-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Apr 2025 14:48:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Apr 2025 14:46:33 +0000   Mon, 14 Apr 2025 14:42:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Apr 2025 14:46:33 +0000   Mon, 14 Apr 2025 14:42:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Apr 2025 14:46:33 +0000   Mon, 14 Apr 2025 14:42:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Apr 2025 14:46:33 +0000   Mon, 14 Apr 2025 14:42:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.112
	  Hostname:    ha-290859-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 96e9da9bd9e1490583702338b88b0c23
	  System UUID:                96e9da9b-d9e1-4905-8370-2338b88b0c23
	  Boot ID:                    b2600615-03c7-4984-8138-73f9baedc04e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-8bg2x    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kindnet-4jz25               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m49s
	  kube-system                 kube-proxy-sp56w            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m43s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m49s (x2 over 5m49s)  kubelet          Node ha-290859-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m49s (x2 over 5m49s)  kubelet          Node ha-290859-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m49s (x2 over 5m49s)  kubelet          Node ha-290859-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m45s                  node-controller  Node ha-290859-m03 event: Registered Node ha-290859-m03 in Controller
	  Normal  NodeReady                5m29s                  kubelet          Node ha-290859-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051284] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038065] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.815736] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.968563] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.543371] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr14 14:29] systemd-fstab-generator[505]: Ignoring "noauto" option for root device
	[  +0.058894] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059786] systemd-fstab-generator[518]: Ignoring "noauto" option for root device
	[  +0.183634] systemd-fstab-generator[532]: Ignoring "noauto" option for root device
	[  +0.109211] systemd-fstab-generator[544]: Ignoring "noauto" option for root device
	[  +0.261328] systemd-fstab-generator[574]: Ignoring "noauto" option for root device
	[  +4.868852] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.061817] kauditd_printk_skb: 158 callbacks suppressed
	[  +0.541337] systemd-fstab-generator[688]: Ignoring "noauto" option for root device
	[  +4.433977] systemd-fstab-generator[826]: Ignoring "noauto" option for root device
	[  +0.054755] kauditd_printk_skb: 46 callbacks suppressed
	[  +7.040196] systemd-fstab-generator[1293]: Ignoring "noauto" option for root device
	[  +0.092655] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.133260] kauditd_printk_skb: 36 callbacks suppressed
	[ +14.332004] kauditd_printk_skb: 23 callbacks suppressed
	[Apr14 14:30] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [b9d0c942045346e617420beacf1ee53ebaa73b72295bfad233845fe524f8b15c] <==
	{"level":"info","ts":"2025-04-14T14:29:20.940910Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:29:20.941291Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T14:29:20.941327Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T14:29:20.942134Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-14T14:29:20.942264Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.110:2379"}
	{"level":"info","ts":"2025-04-14T14:29:20.943625Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-14T14:29:20.943655Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2025-04-14T14:29:27.104552Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.197172ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/node-controller\" limit:1 ","response":"range_response_count:1 size:195"}
	{"level":"info","ts":"2025-04-14T14:29:27.104712Z","caller":"traceutil/trace.go:171","msg":"trace[2014118741] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/node-controller; range_end:; response_count:1; response_revision:283; }","duration":"161.489617ms","start":"2025-04-14T14:29:26.943197Z","end":"2025-04-14T14:29:27.104687Z","steps":["trace[2014118741] 'range keys from in-memory index tree'  (duration: 161.141805ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:29:27.105569Z","caller":"traceutil/trace.go:171","msg":"trace[1003808847] transaction","detail":"{read_only:false; response_revision:284; number_of_response:1; }","duration":"157.128151ms","start":"2025-04-14T14:29:26.948431Z","end":"2025-04-14T14:29:27.105559Z","steps":["trace[1003808847] 'process raft request'  (duration: 84.378612ms)","trace[1003808847] 'compare'  (duration: 71.52798ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-14T14:29:27.104865Z","caller":"traceutil/trace.go:171","msg":"trace[43329066] linearizableReadLoop","detail":"{readStateIndex:297; appliedIndex:296; }","duration":"119.436827ms","start":"2025-04-14T14:29:26.985404Z","end":"2025-04-14T14:29:27.104841Z","steps":["trace[43329066] 'read index received'  (duration: 47.335931ms)","trace[43329066] 'applied index is now lower than readState.Index'  (duration: 72.100547ms)"],"step_count":2}
	{"level":"warn","ts":"2025-04-14T14:29:27.105882Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.482108ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-290859\" limit:1 ","response":"range_response_count:1 size:4024"}
	{"level":"info","ts":"2025-04-14T14:29:27.105907Z","caller":"traceutil/trace.go:171","msg":"trace[1848025885] range","detail":"{range_begin:/registry/minions/ha-290859; range_end:; response_count:1; response_revision:284; }","duration":"120.538719ms","start":"2025-04-14T14:29:26.985360Z","end":"2025-04-14T14:29:27.105899Z","steps":["trace[1848025885] 'agreement among raft nodes before linearized reading'  (duration: 120.384333ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:30:04.979205Z","caller":"traceutil/trace.go:171","msg":"trace[85484590] transaction","detail":"{read_only:false; response_revision:496; number_of_response:1; }","duration":"156.247744ms","start":"2025-04-14T14:30:04.822935Z","end":"2025-04-14T14:30:04.979183Z","steps":["trace[85484590] 'process raft request'  (duration: 156.102613ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:39:20.967676Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":955}
	{"level":"info","ts":"2025-04-14T14:39:20.980951Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":955,"took":"12.971168ms","hash":3281203929,"current-db-size-bytes":2400256,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2400256,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2025-04-14T14:39:20.980998Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3281203929,"revision":955,"compact-revision":-1}
	{"level":"info","ts":"2025-04-14T14:42:12.425594Z","caller":"traceutil/trace.go:171","msg":"trace[593749251] linearizableReadLoop","detail":"{readStateIndex:1974; appliedIndex:1973; }","duration":"103.549581ms","start":"2025-04-14T14:42:12.322004Z","end":"2025-04-14T14:42:12.425554Z","steps":["trace[593749251] 'read index received'  (duration: 102.720139ms)","trace[593749251] 'applied index is now lower than readState.Index'  (duration: 828.805µs)"],"step_count":2}
	{"level":"warn","ts":"2025-04-14T14:42:12.426144Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.759593ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2025-04-14T14:42:12.426196Z","caller":"traceutil/trace.go:171","msg":"trace[257637869] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; response_count:0; response_revision:1805; }","duration":"104.23976ms","start":"2025-04-14T14:42:12.321948Z","end":"2025-04-14T14:42:12.426188Z","steps":["trace[257637869] 'agreement among raft nodes before linearized reading'  (duration: 103.769974ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:42:12.425685Z","caller":"traceutil/trace.go:171","msg":"trace[874985590] transaction","detail":"{read_only:false; response_revision:1805; number_of_response:1; }","duration":"128.996586ms","start":"2025-04-14T14:42:12.296675Z","end":"2025-04-14T14:42:12.425672Z","steps":["trace[874985590] 'process raft request'  (duration: 128.079961ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T14:42:29.811595Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.362023ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11932452365827166964 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:25989634b465d2f3>","response":"size:42"}
	{"level":"info","ts":"2025-04-14T14:44:20.976766Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1495}
	{"level":"info","ts":"2025-04-14T14:44:20.980966Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1495,"took":"3.550898ms","hash":2769383186,"current-db-size-bytes":2400256,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2031616,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2025-04-14T14:44:20.981013Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2769383186,"revision":1495,"compact-revision":955}
	
	
	==> kernel <==
	 14:48:18 up 19 min,  0 users,  load average: 0.09, 0.12, 0.09
	Linux ha-290859 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2df8ccb8d6ed928a95e69ecd1be2105fc737c699aa26805820a0af0eca5bb50d] <==
	I0414 14:47:14.502438       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:47:24.506455       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:47:24.506598       1 main.go:301] handling current node
	I0414 14:47:24.506633       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:47:24.506642       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:47:34.500917       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:47:34.501000       1 main.go:301] handling current node
	I0414 14:47:34.501038       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:47:34.501044       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:47:44.501996       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:47:44.502048       1 main.go:301] handling current node
	I0414 14:47:44.502077       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:47:44.502088       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:47:54.500375       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:47:54.500405       1 main.go:301] handling current node
	I0414 14:47:54.500419       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:47:54.500423       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:48:04.500374       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:48:04.500422       1 main.go:301] handling current node
	I0414 14:48:04.500440       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:48:04.500446       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:48:14.504752       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:48:14.505066       1 main.go:301] handling current node
	I0414 14:48:14.505143       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:48:14.505277       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [3607093f95b0430c4841d7be9ed19d0163ff2e9ee2889a44f89bd1ca07bf42d3] <==
	I0414 14:29:22.362271       1 autoregister_controller.go:144] Starting autoregister controller
	I0414 14:29:22.362276       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0414 14:29:22.362280       1 cache.go:39] Caches are synced for autoregister controller
	I0414 14:29:22.378719       1 controller.go:615] quota admission added evaluator for: namespaces
	I0414 14:29:22.457815       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0414 14:29:23.164003       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0414 14:29:23.168635       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0414 14:29:23.168816       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0414 14:29:23.763560       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0414 14:29:23.812117       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0414 14:29:23.884276       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0414 14:29:23.896601       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.110]
	I0414 14:29:23.897534       1 controller.go:615] quota admission added evaluator for: endpoints
	I0414 14:29:23.902387       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0414 14:29:24.193931       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0414 14:29:25.780107       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0414 14:29:25.808820       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0414 14:29:25.816856       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0414 14:29:29.653221       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0414 14:29:29.756960       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0414 14:41:55.019097       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52466: use of closed network connection
	E0414 14:41:55.440782       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52532: use of closed network connection
	E0414 14:41:55.859929       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52600: use of closed network connection
	E0414 14:41:58.277207       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52686: use of closed network connection
	E0414 14:41:58.438151       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52698: use of closed network connection
	
	
	==> kube-controller-manager [8263b35014337f6119ba3a0d6487090fd5b1b3b8a002a99623620e847d186847] <==
	I0414 14:42:20.033463       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
	I0414 14:42:29.935163       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-290859-m03\" does not exist"
	I0414 14:42:29.948852       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-290859-m03" podCIDRs=["10.244.1.0/24"]
	I0414 14:42:29.949152       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:29.949831       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:29.958386       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="234.248µs"
	I0414 14:42:29.963750       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:29.969981       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="39.002µs"
	I0414 14:42:30.275380       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:30.614411       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:33.964410       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-290859-m03"
	I0414 14:42:34.046665       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:39.961881       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:49.191468       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-290859-m03"
	I0414 14:42:49.192361       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:49.201252       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:49.216690       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="71.679µs"
	I0414 14:42:49.217122       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="45.948µs"
	I0414 14:42:49.230018       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="69.053µs"
	I0414 14:42:52.664944       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="13.387962ms"
	I0414 14:42:52.665652       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="82.546µs"
	I0414 14:42:53.979890       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:43:00.010906       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:46:33.503243       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:47:25.635375       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
	
	
	==> kube-proxy [e22a81661302ff340c9846a7a06a13d955ab98cfe8e7088e0c805fb4f3eee8a2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0414 14:29:30.555771       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0414 14:29:30.580550       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.110"]
	E0414 14:29:30.580640       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0414 14:29:30.617235       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0414 14:29:30.617293       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0414 14:29:30.617328       1 server_linux.go:170] "Using iptables Proxier"
	I0414 14:29:30.620046       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0414 14:29:30.620989       1 server.go:497] "Version info" version="v1.32.2"
	I0414 14:29:30.621018       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 14:29:30.625365       1 config.go:329] "Starting node config controller"
	I0414 14:29:30.625863       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0414 14:29:30.628597       1 config.go:199] "Starting service config controller"
	I0414 14:29:30.628644       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0414 14:29:30.628665       1 config.go:105] "Starting endpoint slice config controller"
	I0414 14:29:30.628683       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0414 14:29:30.726314       1 shared_informer.go:320] Caches are synced for node config
	I0414 14:29:30.729639       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0414 14:29:30.729680       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [341626ffff967b14e3bfaa050905eba2b82a07223c0356ee50b5deeef6d9898b] <==
	E0414 14:29:22.288686       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:22.287191       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:22.288704       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:22.286394       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0414 14:29:22.288719       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0414 14:29:22.285771       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.108289       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0414 14:29:23.108351       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.153824       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:23.153954       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.203744       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0414 14:29:23.203977       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0414 14:29:23.367236       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0414 14:29:23.367550       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.396026       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0414 14:29:23.396243       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.401643       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:23.401820       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.425454       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0414 14:29:23.425684       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.433181       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:23.433222       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.457688       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0414 14:29:23.457949       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0414 14:29:25.662221       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 14 14:43:25 ha-290859 kubelet[1300]: E0414 14:43:25.692316    1300 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:43:25 ha-290859 kubelet[1300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:43:25 ha-290859 kubelet[1300]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:43:25 ha-290859 kubelet[1300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:43:25 ha-290859 kubelet[1300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 14:44:25 ha-290859 kubelet[1300]: E0414 14:44:25.693018    1300 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:44:25 ha-290859 kubelet[1300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:44:25 ha-290859 kubelet[1300]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:44:25 ha-290859 kubelet[1300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:44:25 ha-290859 kubelet[1300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 14:45:25 ha-290859 kubelet[1300]: E0414 14:45:25.692785    1300 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:45:25 ha-290859 kubelet[1300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:45:25 ha-290859 kubelet[1300]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:45:25 ha-290859 kubelet[1300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:45:25 ha-290859 kubelet[1300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 14:46:25 ha-290859 kubelet[1300]: E0414 14:46:25.693088    1300 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:46:25 ha-290859 kubelet[1300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:46:25 ha-290859 kubelet[1300]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:46:25 ha-290859 kubelet[1300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:46:25 ha-290859 kubelet[1300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 14:47:25 ha-290859 kubelet[1300]: E0414 14:47:25.692664    1300 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:47:25 ha-290859 kubelet[1300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:47:25 ha-290859 kubelet[1300]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:47:25 ha-290859 kubelet[1300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:47:25 ha-290859 kubelet[1300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-290859 -n ha-290859
helpers_test.go:261: (dbg) Run:  kubectl --context ha-290859 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-58667487b6-q9jvx
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-290859 describe pod busybox-58667487b6-q9jvx
helpers_test.go:282: (dbg) kubectl --context ha-290859 describe pod busybox-58667487b6-q9jvx:

                                                
                                                
-- stdout --
	Name:             busybox-58667487b6-q9jvx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=58667487b6
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-58667487b6
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fklg7 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-fklg7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                    From               Message
	  ----     ------            ----                   ----               -------
	  Warning  FailedScheduling  7m53s (x3 over 18m)    default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  5m41s (x2 over 5m50s)  default-scheduler  0/2 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  5m20s (x2 over 5m30s)  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (314.90s)
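
Note on the failure mode: the GUEST_START error quoted at the start of this section is hashicorp/go-getter reporting a TCP reset ("read: connection reset by peer") while downloading kubeadm; the struct dumped in the message is go-getter's Client, and the ?checksum=file:... query parameter drives SHA-256 verification of the fetched binary. Below is a minimal sketch of that download step, assuming go-getter v1 — not minikube's actual code; the URLs are copied from the error above:

	// Sketch only: reproduces the kubeadm fetch that failed above.
	// Assumes github.com/hashicorp/go-getter (v1).
	package main

	import (
		"fmt"
		"os"

		getter "github.com/hashicorp/go-getter"
	)

	func main() {
		// The checksum=file:... query parameter tells go-getter to fetch the
		// .sha256 file and verify the downloaded binary against it.
		src := "https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm" +
			"?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256"
		dst := "kubeadm.download"

		// A mid-transfer reset like the one logged above surfaces here as an
		// error rather than as a corrupt file on disk.
		if err := getter.GetFile(dst, src); err != nil {
			fmt.Fprintln(os.Stderr, "download failed:", err)
			os.Exit(1)
		}
		fmt.Println("downloaded and verified", dst)
	}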

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (2.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:305: expected profile "ha-290859" in json of 'profile list' to include 4 nodes but have 3 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-290859\",\"Status\":\"OK\",\"Config\":{\"Name\":\"ha-290859\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.32.2\",\"ClusterName\":\"ha-290859\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"containerd\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.110\",\"Port\":8443,\"KubernetesVersion\":\"v1.32.2\",\"ContainerRuntime\":\"containerd\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.111\",\"Port\":8443,\"KubernetesVersion\":\"v1.32.2\",\"ContainerRuntime\":\"containerd\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.112\",\"Port\":0,\"KubernetesVersion\":\"v1.32.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
ha_test.go:309: expected profile "ha-290859" in json of 'profile list' to have "HAppy" status but have "OK" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-290859\",\"Status\":\"OK\",\"Config\":{\"Name\":\"ha-290859\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.32.2\",\"ClusterName\":\"ha-290859\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"containerd\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.110\",\"Port\":8443,\"KubernetesVersion\":\"v1.32.2\",\"ContainerRuntime\":\"containerd\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.111\",\"Port\":8443,\"KubernetesVersion\":\"v1.32.2\",\"ContainerRuntime\":\"containerd\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.112\",\"Port\":0,\"KubernetesVersion\":\"v1.32.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-290859 -n ha-290859
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-290859 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-290859 logs -n 25: (1.096846094s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x -- nslookup |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx -- nslookup |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg -- nslookup |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x             |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx             |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg             |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg -- sh       |           |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1            |           |         |         |                     |                     |
	| node    | add -p ha-290859 -v=7                | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:42 UTC | 14 Apr 25 14:42 UTC |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-290859 node stop m02 -v=7         | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:42 UTC | 14 Apr 25 14:42 UTC |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-290859 node start m02 -v=7        | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:43 UTC |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 14:28:44
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 14:28:44.853283 1213155 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:28:44.853383 1213155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:28:44.853391 1213155 out.go:358] Setting ErrFile to fd 2...
	I0414 14:28:44.853395 1213155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:28:44.853589 1213155 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
	I0414 14:28:44.854173 1213155 out.go:352] Setting JSON to false
	I0414 14:28:44.855127 1213155 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":22268,"bootTime":1744618657,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 14:28:44.855241 1213155 start.go:139] virtualization: kvm guest
	I0414 14:28:44.857434 1213155 out.go:177] * [ha-290859] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 14:28:44.858763 1213155 out.go:177]   - MINIKUBE_LOCATION=20512
	I0414 14:28:44.858802 1213155 notify.go:220] Checking for updates...
	I0414 14:28:44.861113 1213155 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 14:28:44.862568 1213155 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:28:44.864291 1213155 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:28:44.865558 1213155 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 14:28:44.866690 1213155 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 14:28:44.867994 1213155 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 14:28:44.903880 1213155 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 14:28:44.904972 1213155 start.go:297] selected driver: kvm2
	I0414 14:28:44.904990 1213155 start.go:901] validating driver "kvm2" against <nil>
	I0414 14:28:44.905002 1213155 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 14:28:44.905693 1213155 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:28:44.905760 1213155 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20512-1196368/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 14:28:44.921165 1213155 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 14:28:44.921211 1213155 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 14:28:44.921449 1213155 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 14:28:44.921483 1213155 cni.go:84] Creating CNI manager for ""
	I0414 14:28:44.921521 1213155 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0414 14:28:44.921528 1213155 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0414 14:28:44.921581 1213155 start.go:340] cluster config:
	{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:28:44.921681 1213155 iso.go:125] acquiring lock: {Name:mkbf783c803effe6c4b8297ac6b84dcca9e29413 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:28:44.923479 1213155 out.go:177] * Starting "ha-290859" primary control-plane node in "ha-290859" cluster
	I0414 14:28:44.924489 1213155 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:28:44.924534 1213155 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
	I0414 14:28:44.924545 1213155 cache.go:56] Caching tarball of preloaded images
	I0414 14:28:44.924630 1213155 preload.go:172] Found /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0414 14:28:44.924642 1213155 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0414 14:28:44.925004 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:28:44.925036 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json: {Name:mk9cf46898e9311ef305249e5d7a46d116958366 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:28:44.925215 1213155 start.go:360] acquireMachinesLock for ha-290859: {Name:mk496006d22a0565bb9e0d565e1b3cb0cf0971cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 14:28:44.925249 1213155 start.go:364] duration metric: took 19.936µs to acquireMachinesLock for "ha-290859"
	I0414 14:28:44.925270 1213155 start.go:93] Provisioning new machine with config: &{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0414 14:28:44.925333 1213155 start.go:125] createHost starting for "" (driver="kvm2")
	I0414 14:28:44.926873 1213155 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0414 14:28:44.927025 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:28:44.927081 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:28:44.941913 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35769
	I0414 14:28:44.942352 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:28:44.942833 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:28:44.942851 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:28:44.943193 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:28:44.943375 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:28:44.943526 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:28:44.943664 1213155 start.go:159] libmachine.API.Create for "ha-290859" (driver="kvm2")
	I0414 14:28:44.943687 1213155 client.go:168] LocalClient.Create starting
	I0414 14:28:44.943713 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem
	I0414 14:28:44.943749 1213155 main.go:141] libmachine: Decoding PEM data...
	I0414 14:28:44.943766 1213155 main.go:141] libmachine: Parsing certificate...
	I0414 14:28:44.943825 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem
	I0414 14:28:44.943844 1213155 main.go:141] libmachine: Decoding PEM data...
	I0414 14:28:44.943857 1213155 main.go:141] libmachine: Parsing certificate...
	I0414 14:28:44.943880 1213155 main.go:141] libmachine: Running pre-create checks...
	I0414 14:28:44.943888 1213155 main.go:141] libmachine: (ha-290859) Calling .PreCreateCheck
	I0414 14:28:44.944202 1213155 main.go:141] libmachine: (ha-290859) Calling .GetConfigRaw
	I0414 14:28:44.944583 1213155 main.go:141] libmachine: Creating machine...
	I0414 14:28:44.944596 1213155 main.go:141] libmachine: (ha-290859) Calling .Create
	I0414 14:28:44.944741 1213155 main.go:141] libmachine: (ha-290859) creating KVM machine...
	I0414 14:28:44.944764 1213155 main.go:141] libmachine: (ha-290859) creating network...
	I0414 14:28:44.945897 1213155 main.go:141] libmachine: (ha-290859) DBG | found existing default KVM network
	I0414 14:28:44.946500 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:44.946375 1213178 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001236b0}
	I0414 14:28:44.946525 1213155 main.go:141] libmachine: (ha-290859) DBG | created network xml: 
	I0414 14:28:44.946536 1213155 main.go:141] libmachine: (ha-290859) DBG | <network>
	I0414 14:28:44.946547 1213155 main.go:141] libmachine: (ha-290859) DBG |   <name>mk-ha-290859</name>
	I0414 14:28:44.946556 1213155 main.go:141] libmachine: (ha-290859) DBG |   <dns enable='no'/>
	I0414 14:28:44.946567 1213155 main.go:141] libmachine: (ha-290859) DBG |   
	I0414 14:28:44.946578 1213155 main.go:141] libmachine: (ha-290859) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0414 14:28:44.946589 1213155 main.go:141] libmachine: (ha-290859) DBG |     <dhcp>
	I0414 14:28:44.946597 1213155 main.go:141] libmachine: (ha-290859) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0414 14:28:44.946611 1213155 main.go:141] libmachine: (ha-290859) DBG |     </dhcp>
	I0414 14:28:44.946635 1213155 main.go:141] libmachine: (ha-290859) DBG |   </ip>
	I0414 14:28:44.946659 1213155 main.go:141] libmachine: (ha-290859) DBG |   
	I0414 14:28:44.946681 1213155 main.go:141] libmachine: (ha-290859) DBG | </network>
	I0414 14:28:44.946692 1213155 main.go:141] libmachine: (ha-290859) DBG | 
	I0414 14:28:44.951588 1213155 main.go:141] libmachine: (ha-290859) DBG | trying to create private KVM network mk-ha-290859 192.168.39.0/24...
	I0414 14:28:45.019463 1213155 main.go:141] libmachine: (ha-290859) DBG | private KVM network mk-ha-290859 192.168.39.0/24 created
	I0414 14:28:45.019524 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.019424 1213178 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:28:45.019537 1213155 main.go:141] libmachine: (ha-290859) setting up store path in /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859 ...
	I0414 14:28:45.019577 1213155 main.go:141] libmachine: (ha-290859) building disk image from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 14:28:45.019612 1213155 main.go:141] libmachine: (ha-290859) Downloading /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 14:28:45.329551 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.329430 1213178 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa...
	I0414 14:28:45.651739 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.651571 1213178 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/ha-290859.rawdisk...
	I0414 14:28:45.651774 1213155 main.go:141] libmachine: (ha-290859) DBG | Writing magic tar header
	I0414 14:28:45.651813 1213155 main.go:141] libmachine: (ha-290859) DBG | Writing SSH key tar header
	I0414 14:28:45.651828 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:45.651709 1213178 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859 ...
	I0414 14:28:45.651838 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859
	I0414 14:28:45.651849 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines
	I0414 14:28:45.651870 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:28:45.651877 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368
	I0414 14:28:45.651888 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859 (perms=drwx------)
	I0414 14:28:45.651901 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines (perms=drwxr-xr-x)
	I0414 14:28:45.651912 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube (perms=drwxr-xr-x)
	I0414 14:28:45.651969 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 14:28:45.651997 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home/jenkins
	I0414 14:28:45.652007 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368 (perms=drwxrwxr-x)
	I0414 14:28:45.652022 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 14:28:45.652031 1213155 main.go:141] libmachine: (ha-290859) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 14:28:45.652040 1213155 main.go:141] libmachine: (ha-290859) DBG | checking permissions on dir: /home
	I0414 14:28:45.652050 1213155 main.go:141] libmachine: (ha-290859) DBG | skipping /home - not owner
	I0414 14:28:45.652117 1213155 main.go:141] libmachine: (ha-290859) creating domain...
	I0414 14:28:45.653155 1213155 main.go:141] libmachine: (ha-290859) define libvirt domain using xml: 
	I0414 14:28:45.653173 1213155 main.go:141] libmachine: (ha-290859) <domain type='kvm'>
	I0414 14:28:45.653182 1213155 main.go:141] libmachine: (ha-290859)   <name>ha-290859</name>
	I0414 14:28:45.653197 1213155 main.go:141] libmachine: (ha-290859)   <memory unit='MiB'>2200</memory>
	I0414 14:28:45.653206 1213155 main.go:141] libmachine: (ha-290859)   <vcpu>2</vcpu>
	I0414 14:28:45.653212 1213155 main.go:141] libmachine: (ha-290859)   <features>
	I0414 14:28:45.653231 1213155 main.go:141] libmachine: (ha-290859)     <acpi/>
	I0414 14:28:45.653240 1213155 main.go:141] libmachine: (ha-290859)     <apic/>
	I0414 14:28:45.653258 1213155 main.go:141] libmachine: (ha-290859)     <pae/>
	I0414 14:28:45.653267 1213155 main.go:141] libmachine: (ha-290859)     
	I0414 14:28:45.653272 1213155 main.go:141] libmachine: (ha-290859)   </features>
	I0414 14:28:45.653277 1213155 main.go:141] libmachine: (ha-290859)   <cpu mode='host-passthrough'>
	I0414 14:28:45.653281 1213155 main.go:141] libmachine: (ha-290859)   
	I0414 14:28:45.653287 1213155 main.go:141] libmachine: (ha-290859)   </cpu>
	I0414 14:28:45.653317 1213155 main.go:141] libmachine: (ha-290859)   <os>
	I0414 14:28:45.653340 1213155 main.go:141] libmachine: (ha-290859)     <type>hvm</type>
	I0414 14:28:45.653351 1213155 main.go:141] libmachine: (ha-290859)     <boot dev='cdrom'/>
	I0414 14:28:45.653362 1213155 main.go:141] libmachine: (ha-290859)     <boot dev='hd'/>
	I0414 14:28:45.653372 1213155 main.go:141] libmachine: (ha-290859)     <bootmenu enable='no'/>
	I0414 14:28:45.653379 1213155 main.go:141] libmachine: (ha-290859)   </os>
	I0414 14:28:45.653387 1213155 main.go:141] libmachine: (ha-290859)   <devices>
	I0414 14:28:45.653396 1213155 main.go:141] libmachine: (ha-290859)     <disk type='file' device='cdrom'>
	I0414 14:28:45.653409 1213155 main.go:141] libmachine: (ha-290859)       <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/boot2docker.iso'/>
	I0414 14:28:45.653425 1213155 main.go:141] libmachine: (ha-290859)       <target dev='hdc' bus='scsi'/>
	I0414 14:28:45.653434 1213155 main.go:141] libmachine: (ha-290859)       <readonly/>
	I0414 14:28:45.653441 1213155 main.go:141] libmachine: (ha-290859)     </disk>
	I0414 14:28:45.653450 1213155 main.go:141] libmachine: (ha-290859)     <disk type='file' device='disk'>
	I0414 14:28:45.653459 1213155 main.go:141] libmachine: (ha-290859)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 14:28:45.653472 1213155 main.go:141] libmachine: (ha-290859)       <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/ha-290859.rawdisk'/>
	I0414 14:28:45.653484 1213155 main.go:141] libmachine: (ha-290859)       <target dev='hda' bus='virtio'/>
	I0414 14:28:45.653515 1213155 main.go:141] libmachine: (ha-290859)     </disk>
	I0414 14:28:45.653535 1213155 main.go:141] libmachine: (ha-290859)     <interface type='network'>
	I0414 14:28:45.653542 1213155 main.go:141] libmachine: (ha-290859)       <source network='mk-ha-290859'/>
	I0414 14:28:45.653551 1213155 main.go:141] libmachine: (ha-290859)       <model type='virtio'/>
	I0414 14:28:45.653571 1213155 main.go:141] libmachine: (ha-290859)     </interface>
	I0414 14:28:45.653583 1213155 main.go:141] libmachine: (ha-290859)     <interface type='network'>
	I0414 14:28:45.653600 1213155 main.go:141] libmachine: (ha-290859)       <source network='default'/>
	I0414 14:28:45.653612 1213155 main.go:141] libmachine: (ha-290859)       <model type='virtio'/>
	I0414 14:28:45.653620 1213155 main.go:141] libmachine: (ha-290859)     </interface>
	I0414 14:28:45.653629 1213155 main.go:141] libmachine: (ha-290859)     <serial type='pty'>
	I0414 14:28:45.653637 1213155 main.go:141] libmachine: (ha-290859)       <target port='0'/>
	I0414 14:28:45.653643 1213155 main.go:141] libmachine: (ha-290859)     </serial>
	I0414 14:28:45.653650 1213155 main.go:141] libmachine: (ha-290859)     <console type='pty'>
	I0414 14:28:45.653666 1213155 main.go:141] libmachine: (ha-290859)       <target type='serial' port='0'/>
	I0414 14:28:45.653677 1213155 main.go:141] libmachine: (ha-290859)     </console>
	I0414 14:28:45.653688 1213155 main.go:141] libmachine: (ha-290859)     <rng model='virtio'>
	I0414 14:28:45.653706 1213155 main.go:141] libmachine: (ha-290859)       <backend model='random'>/dev/random</backend>
	I0414 14:28:45.653722 1213155 main.go:141] libmachine: (ha-290859)     </rng>
	I0414 14:28:45.653733 1213155 main.go:141] libmachine: (ha-290859)     
	I0414 14:28:45.653742 1213155 main.go:141] libmachine: (ha-290859)     
	I0414 14:28:45.653750 1213155 main.go:141] libmachine: (ha-290859)   </devices>
	I0414 14:28:45.653759 1213155 main.go:141] libmachine: (ha-290859) </domain>
	I0414 14:28:45.653770 1213155 main.go:141] libmachine: (ha-290859) 
	I0414 14:28:45.658722 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:59:bb:2c in network default
	I0414 14:28:45.659333 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:45.659353 1213155 main.go:141] libmachine: (ha-290859) starting domain...
	I0414 14:28:45.659378 1213155 main.go:141] libmachine: (ha-290859) ensuring networks are active...
	I0414 14:28:45.660118 1213155 main.go:141] libmachine: (ha-290859) Ensuring network default is active
	I0414 14:28:45.660455 1213155 main.go:141] libmachine: (ha-290859) Ensuring network mk-ha-290859 is active
	I0414 14:28:45.660871 1213155 main.go:141] libmachine: (ha-290859) getting domain XML...
	I0414 14:28:45.661572 1213155 main.go:141] libmachine: (ha-290859) creating domain...
	I0414 14:28:46.865636 1213155 main.go:141] libmachine: (ha-290859) waiting for IP...
	I0414 14:28:46.866384 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:46.866766 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:46.866798 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:46.866746 1213178 retry.go:31] will retry after 192.973653ms: waiting for domain to come up
	I0414 14:28:47.061336 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:47.061771 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:47.061833 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:47.061746 1213178 retry.go:31] will retry after 359.567223ms: waiting for domain to come up
	I0414 14:28:47.423487 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:47.423982 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:47.424016 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:47.423949 1213178 retry.go:31] will retry after 421.939914ms: waiting for domain to come up
	I0414 14:28:47.847747 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:47.848233 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:47.848285 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:47.848207 1213178 retry.go:31] will retry after 530.391474ms: waiting for domain to come up
	I0414 14:28:48.380081 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:48.380580 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:48.380623 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:48.380551 1213178 retry.go:31] will retry after 642.117854ms: waiting for domain to come up
	I0414 14:28:49.024104 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:49.024507 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:49.024543 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:49.024472 1213178 retry.go:31] will retry after 676.607867ms: waiting for domain to come up
	I0414 14:28:49.702625 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:49.702971 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:49.702999 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:49.702940 1213178 retry.go:31] will retry after 827.403569ms: waiting for domain to come up
	I0414 14:28:50.531673 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:50.532146 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:50.532168 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:50.532111 1213178 retry.go:31] will retry after 1.096062201s: waiting for domain to come up
	I0414 14:28:51.630700 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:51.631223 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:51.631271 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:51.631180 1213178 retry.go:31] will retry after 1.695737217s: waiting for domain to come up
	I0414 14:28:53.328391 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:53.328936 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:53.328976 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:53.328895 1213178 retry.go:31] will retry after 1.847433296s: waiting for domain to come up
	I0414 14:28:55.178635 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:55.179196 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:55.179222 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:55.179116 1213178 retry.go:31] will retry after 1.882043118s: waiting for domain to come up
	I0414 14:28:57.063275 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:57.063819 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:57.063839 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:57.063785 1213178 retry.go:31] will retry after 2.565601812s: waiting for domain to come up
	I0414 14:28:59.632546 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:28:59.633076 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:28:59.633121 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:28:59.633056 1213178 retry.go:31] will retry after 3.119155423s: waiting for domain to come up
	I0414 14:29:02.755950 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:02.756520 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:29:02.756617 1213155 main.go:141] libmachine: (ha-290859) DBG | I0414 14:29:02.756481 1213178 retry.go:31] will retry after 3.570724653s: waiting for domain to come up
	I0414 14:29:06.329744 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.330242 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has current primary IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.330260 1213155 main.go:141] libmachine: (ha-290859) found domain IP: 192.168.39.110
	I0414 14:29:06.330269 1213155 main.go:141] libmachine: (ha-290859) reserving static IP address...
	I0414 14:29:06.330641 1213155 main.go:141] libmachine: (ha-290859) DBG | unable to find host DHCP lease matching {name: "ha-290859", mac: "52:54:00:be:9f:8b", ip: "192.168.39.110"} in network mk-ha-290859
	I0414 14:29:06.406487 1213155 main.go:141] libmachine: (ha-290859) DBG | Getting to WaitForSSH function...
	I0414 14:29:06.406521 1213155 main.go:141] libmachine: (ha-290859) reserved static IP address 192.168.39.110 for domain ha-290859
	I0414 14:29:06.406533 1213155 main.go:141] libmachine: (ha-290859) waiting for SSH...
	I0414 14:29:06.409873 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.410210 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:minikube Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.410253 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.410314 1213155 main.go:141] libmachine: (ha-290859) DBG | Using SSH client type: external
	I0414 14:29:06.410387 1213155 main.go:141] libmachine: (ha-290859) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa (-rw-------)
	I0414 14:29:06.410418 1213155 main.go:141] libmachine: (ha-290859) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.110 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:29:06.410439 1213155 main.go:141] libmachine: (ha-290859) DBG | About to run SSH command:
	I0414 14:29:06.410452 1213155 main.go:141] libmachine: (ha-290859) DBG | exit 0
	I0414 14:29:06.535060 1213155 main.go:141] libmachine: (ha-290859) DBG | SSH cmd err, output: <nil>: 
	I0414 14:29:06.535328 1213155 main.go:141] libmachine: (ha-290859) KVM machine creation complete
	I0414 14:29:06.535695 1213155 main.go:141] libmachine: (ha-290859) Calling .GetConfigRaw
	I0414 14:29:06.536306 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:06.536530 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:06.536742 1213155 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 14:29:06.536766 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:06.538276 1213155 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 14:29:06.538292 1213155 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 14:29:06.538297 1213155 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 14:29:06.538303 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:06.540789 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.541096 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.541142 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.541273 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:06.541468 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.541620 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.541797 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:06.541943 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:06.542216 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:06.542236 1213155 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 14:29:06.650464 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:29:06.650493 1213155 main.go:141] libmachine: Detecting the provisioner...
	I0414 14:29:06.650505 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:06.653952 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.654723 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.654757 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.654985 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:06.655204 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.655393 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.655541 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:06.655742 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:06.655964 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:06.655983 1213155 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 14:29:06.763752 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 14:29:06.763848 1213155 main.go:141] libmachine: found compatible host: buildroot
	I0414 14:29:06.763862 1213155 main.go:141] libmachine: Provisioning with buildroot...
	I0414 14:29:06.763874 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:29:06.764294 1213155 buildroot.go:166] provisioning hostname "ha-290859"
	I0414 14:29:06.764326 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:29:06.764523 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:06.767077 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.767516 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.767542 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.767639 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:06.767813 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.767978 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.768165 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:06.768341 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:06.768572 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:06.768583 1213155 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-290859 && echo "ha-290859" | sudo tee /etc/hostname
	I0414 14:29:06.889296 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-290859
	
	I0414 14:29:06.889330 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:06.892172 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.892600 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:06.892626 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:06.892865 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:06.893083 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.893277 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:06.893435 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:06.893648 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:06.893858 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:06.893874 1213155 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-290859' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-290859/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-290859' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:29:07.007141 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:29:07.007184 1213155 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1196368/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1196368/.minikube}
	I0414 14:29:07.007203 1213155 buildroot.go:174] setting up certificates
	I0414 14:29:07.007215 1213155 provision.go:84] configureAuth start
	I0414 14:29:07.007224 1213155 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:29:07.007528 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:29:07.010400 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.010788 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.010824 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.010979 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.012963 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.013271 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.013387 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.013515 1213155 provision.go:143] copyHostCerts
	I0414 14:29:07.013548 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:29:07.013586 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem, removing ...
	I0414 14:29:07.013609 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:29:07.013691 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem (1082 bytes)
	I0414 14:29:07.013790 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:29:07.013815 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem, removing ...
	I0414 14:29:07.013825 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:29:07.013863 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem (1123 bytes)
	I0414 14:29:07.013930 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:29:07.013953 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem, removing ...
	I0414 14:29:07.013962 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:29:07.013998 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem (1675 bytes)
	I0414 14:29:07.014066 1213155 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem org=jenkins.ha-290859 san=[127.0.0.1 192.168.39.110 ha-290859 localhost minikube]
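(provision.go:117 issues a CA-signed server certificate whose SANs cover every name and IP the node answers on: 127.0.0.1, 192.168.39.110, ha-290859, localhost, minikube. A rough sketch of that step with Go's crypto/x509 follows, using a throwaway CA in place of the .minikube one; serial numbers and lifetimes are assumptions.)

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for .minikube/certs/ca.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SAN set from the provision.go line above.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-290859"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.110")},
		DNSNames:     []string{"ha-290859", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	_ = os.WriteFile("server.pem",
		pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
}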
	I0414 14:29:07.096347 1213155 provision.go:177] copyRemoteCerts
	I0414 14:29:07.096413 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:29:07.096445 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.099387 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.099720 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.099754 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.099919 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.100133 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.100320 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.100477 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:07.185597 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0414 14:29:07.185665 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 14:29:07.208427 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0414 14:29:07.208514 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0414 14:29:07.230077 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0414 14:29:07.230146 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 14:29:07.252057 1213155 provision.go:87] duration metric: took 244.822415ms to configureAuth
	I0414 14:29:07.252098 1213155 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:29:07.252381 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:07.252417 1213155 main.go:141] libmachine: Checking connection to Docker...
	I0414 14:29:07.252428 1213155 main.go:141] libmachine: (ha-290859) Calling .GetURL
	I0414 14:29:07.253526 1213155 main.go:141] libmachine: (ha-290859) DBG | using libvirt version 6000000
	I0414 14:29:07.255629 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.255987 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.256013 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.256164 1213155 main.go:141] libmachine: Docker is up and running!
	I0414 14:29:07.256179 1213155 main.go:141] libmachine: Reticulating splines...
	I0414 14:29:07.256186 1213155 client.go:171] duration metric: took 22.312490028s to LocalClient.Create
	I0414 14:29:07.256207 1213155 start.go:167] duration metric: took 22.312544194s to libmachine.API.Create "ha-290859"
	I0414 14:29:07.256216 1213155 start.go:293] postStartSetup for "ha-290859" (driver="kvm2")
	I0414 14:29:07.256225 1213155 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:29:07.256242 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.256494 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:29:07.256518 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.258683 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.259095 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.259129 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.259274 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.259443 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.259598 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.259770 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:07.341222 1213155 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:29:07.344960 1213155 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:29:07.344983 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/addons for local assets ...
	I0414 14:29:07.345036 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/files for local assets ...
	I0414 14:29:07.345105 1213155 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> 12036392.pem in /etc/ssl/certs
	I0414 14:29:07.345117 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /etc/ssl/certs/12036392.pem
	I0414 14:29:07.345204 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:29:07.353618 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:29:07.375295 1213155 start.go:296] duration metric: took 119.0622ms for postStartSetup
	I0414 14:29:07.375348 1213155 main.go:141] libmachine: (ha-290859) Calling .GetConfigRaw
	I0414 14:29:07.376009 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:29:07.378738 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.379089 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.379127 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.379360 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:29:07.379552 1213155 start.go:128] duration metric: took 22.454193164s to createHost
	I0414 14:29:07.379576 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.381911 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.382271 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.382299 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.382412 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.382636 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.382763 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.382918 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.383103 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:07.383383 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:29:07.383397 1213155 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:29:07.491798 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744640947.466359070
	
	I0414 14:29:07.491832 1213155 fix.go:216] guest clock: 1744640947.466359070
	I0414 14:29:07.491843 1213155 fix.go:229] Guest: 2025-04-14 14:29:07.46635907 +0000 UTC Remote: 2025-04-14 14:29:07.37956282 +0000 UTC m=+22.563725092 (delta=86.79625ms)
	I0414 14:29:07.491874 1213155 fix.go:200] guest clock delta is within tolerance: 86.79625ms
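(fix.go samples the guest clock with `date +%s.%N`, compares it against the host clock, and proceeds because the ~87ms delta is within tolerance. A minimal sketch of that parse-and-compare follows; the one-second threshold below is an assumed value, the log does not state it.)

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses "sec.nanos" output from `date +%s.%N` on the
// guest and returns how far it drifts from the local clock.
func guestClockDelta(out string) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	nsec := int64(0)
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	guest := time.Unix(sec, nsec)
	return time.Since(guest), nil
}

func main() {
	// Stamp taken from the log; at test time the delta was ~86.8ms.
	d, _ := guestClockDelta("1744640947.466359070")
	if d < 0 {
		d = -d
	}
	fmt.Printf("delta=%v within tolerance: %v\n", d, d < time.Second)
}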
	I0414 14:29:07.491882 1213155 start.go:83] releasing machines lock for "ha-290859", held for 22.566621352s
	I0414 14:29:07.491951 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.492257 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:29:07.494784 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.495186 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.495213 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.495369 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.495891 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.496108 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:07.496210 1213155 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:29:07.496270 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.496330 1213155 ssh_runner.go:195] Run: cat /version.json
	I0414 14:29:07.496359 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:07.499187 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.499556 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.499585 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.499605 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.499687 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.499909 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.500059 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:07.500076 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.500080 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:07.500225 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:07.500343 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:07.500495 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:07.500676 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:07.500868 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:07.610155 1213155 ssh_runner.go:195] Run: systemctl --version
	I0414 14:29:07.615832 1213155 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 14:29:07.620841 1213155 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:29:07.620918 1213155 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:29:07.635201 1213155 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 14:29:07.635238 1213155 start.go:495] detecting cgroup driver to use...
	I0414 14:29:07.635339 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0414 14:29:07.664507 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0414 14:29:07.677886 1213155 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:29:07.677968 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:29:07.691126 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:29:07.704327 1213155 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:29:07.821296 1213155 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:29:07.981478 1213155 docker.go:233] disabling docker service ...
	I0414 14:29:07.981570 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:29:07.995082 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:29:08.007593 1213155 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:29:08.118166 1213155 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:29:08.233009 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 14:29:08.245943 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:29:08.262966 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0414 14:29:08.272218 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0414 14:29:08.281344 1213155 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0414 14:29:08.281397 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0414 14:29:08.290468 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:29:08.299561 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0414 14:29:08.308656 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:29:08.317719 1213155 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:29:08.327133 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0414 14:29:08.336264 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0414 14:29:08.345279 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0414 14:29:08.354386 1213155 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:29:08.362578 1213155 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:29:08.362625 1213155 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:29:08.374609 1213155 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 14:29:08.383117 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:29:08.490311 1213155 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0414 14:29:08.517222 1213155 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0414 14:29:08.517297 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:29:08.522141 1213155 retry.go:31] will retry after 1.326617724s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
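(After `systemctl restart containerd` the socket briefly disappears, so retry.go polls `stat` until it comes back. A small sketch of the same wait-for-socket loop follows; the doubling backoff is an assumption, the log shows only a single 1.3s retry.)

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for path until it exists or timeout elapses,
// roughly what the retry.go line above does for containerd.sock.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	backoff := time.Second
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // socket is back
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("%s did not appear within %s", path, timeout)
		}
		time.Sleep(backoff)
		if backoff < 10*time.Second {
			backoff *= 2
		}
	}
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}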
	I0414 14:29:09.849693 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:29:09.855377 1213155 start.go:563] Will wait 60s for crictl version
	I0414 14:29:09.855452 1213155 ssh_runner.go:195] Run: which crictl
	I0414 14:29:09.859356 1213155 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:29:09.901676 1213155 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0414 14:29:09.901749 1213155 ssh_runner.go:195] Run: containerd --version
	I0414 14:29:09.933729 1213155 ssh_runner.go:195] Run: containerd --version
	I0414 14:29:09.957147 1213155 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.23 ...
	I0414 14:29:09.958358 1213155 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:29:09.961074 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:09.961436 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:09.961465 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:09.961654 1213155 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 14:29:09.965618 1213155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:29:09.977763 1213155 kubeadm.go:883] updating cluster {Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 14:29:09.977920 1213155 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:29:09.977985 1213155 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:29:10.007423 1213155 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0414 14:29:10.007567 1213155 ssh_runner.go:195] Run: which lz4
	I0414 14:29:10.011302 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0414 14:29:10.011399 1213155 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 14:29:10.015201 1213155 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 14:29:10.015237 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (398567491 bytes)
	I0414 14:29:11.177802 1213155 containerd.go:563] duration metric: took 1.166430977s to copy over tarball
	I0414 14:29:11.177883 1213155 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 14:29:13.222422 1213155 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.044497794s)
	I0414 14:29:13.222461 1213155 containerd.go:570] duration metric: took 2.04462504s to extract the tarball
	I0414 14:29:13.222471 1213155 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 14:29:13.258541 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:29:13.368119 1213155 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0414 14:29:13.394813 1213155 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:29:13.428402 1213155 retry.go:31] will retry after 248.442754ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-04-14T14:29:13Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0414 14:29:13.677983 1213155 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:29:13.709958 1213155 containerd.go:627] all images are preloaded for containerd runtime.
	I0414 14:29:13.709986 1213155 cache_images.go:84] Images are preloaded, skipping loading
	I0414 14:29:13.709997 1213155 kubeadm.go:934] updating node { 192.168.39.110 8443 v1.32.2 containerd true true} ...
	I0414 14:29:13.710119 1213155 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-290859 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.110

	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 14:29:13.710205 1213155 ssh_runner.go:195] Run: sudo crictl info
	I0414 14:29:13.747854 1213155 cni.go:84] Creating CNI manager for ""
	I0414 14:29:13.747881 1213155 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0414 14:29:13.747891 1213155 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 14:29:13.747912 1213155 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.110 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-290859 NodeName:ha-290859 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.110"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.110 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 14:29:13.748064 1213155 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.110
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-290859"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.110"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.110"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 14:29:13.748098 1213155 kube-vip.go:115] generating kube-vip config ...
	I0414 14:29:13.748144 1213155 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0414 14:29:13.764006 1213155 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0414 14:29:13.764157 1213155 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.10
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
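(The vip_leaseduration/vip_renewdeadline/vip_retryperiod values above, 5/3/1 seconds, drive kube-vip's leader election on the plndr-cp-lock lease: whichever control-plane node holds the lease advertises the 192.168.39.254 VIP. The sketch below shows the same election pattern with client-go's generic leaderelection package; it is the underlying mechanism, not kube-vip's code.)

package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "plndr-cp-lock", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "ha-290859"},
	}
	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 5 * time.Second, // vip_leaseduration
		RenewDeadline: 3 * time.Second, // vip_renewdeadline
		RetryPeriod:   1 * time.Second, // vip_retryperiod
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("won lease: advertise the VIP here")
			},
			OnStoppedLeading: func() {
				log.Println("lost lease: withdraw the VIP here")
			},
		},
	})
}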
	I0414 14:29:13.764258 1213155 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 14:29:13.773742 1213155 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 14:29:13.773825 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0414 14:29:13.782879 1213155 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0414 14:29:13.798384 1213155 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 14:29:13.813614 1213155 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0414 14:29:13.828571 1213155 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1448 bytes)
	I0414 14:29:13.844489 1213155 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0414 14:29:13.848595 1213155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
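(This one-liner is minikube's idempotent /etc/hosts update: strip any stale control-plane.minikube.internal entry, append the fresh mapping, and copy the temp file back with sudo. The same logic as a small Go sketch; paths are taken from the log, the helper name is hypothetical, and writing /etc/hosts needs root.)

package main

import (
	"os"
	"strings"
)

// upsertHost rewrites the hosts file so exactly one line maps name to ip,
// mirroring the grep -v / echo / cp pipeline in the log above.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any existing entry for this name (the grep -v $'\tname$' step).
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	_ = upsertHost("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal")
}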
	I0414 14:29:13.861109 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:29:13.970530 1213155 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:29:13.987774 1213155 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859 for IP: 192.168.39.110
	I0414 14:29:13.987806 1213155 certs.go:194] generating shared ca certs ...
	I0414 14:29:13.987826 1213155 certs.go:226] acquiring lock for ca certs: {Name:mk7215406b4c41badf9eca6bf9f1036fd88f670e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:13.988007 1213155 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key
	I0414 14:29:13.988081 1213155 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key
	I0414 14:29:13.988097 1213155 certs.go:256] generating profile certs ...
	I0414 14:29:13.988180 1213155 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key
	I0414 14:29:13.988200 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt with IP's: []
	I0414 14:29:14.112386 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt ...
	I0414 14:29:14.112419 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt: {Name:mkaa12fb6551a5751b7fccd564d65a45c41d9fae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.112582 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key ...
	I0414 14:29:14.112593 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key: {Name:mk289f4dd0a4fd9031dc4ffc7198a0cf95bd5550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.112674 1213155 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037
	I0414 14:29:14.112690 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110 192.168.39.254]
	I0414 14:29:14.362652 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037 ...
	I0414 14:29:14.362686 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037: {Name:mkb37a2918627d85c90b385a1878c8973ae4ce15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.362861 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037 ...
	I0414 14:29:14.362875 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037: {Name:mk9be12aff468559ae8511cb5c354c2cb0f19d89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.362947 1213155 certs.go:381] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.7a43f037 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt
	I0414 14:29:14.363058 1213155 certs.go:385] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.7a43f037 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key
	I0414 14:29:14.363124 1213155 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key
	I0414 14:29:14.363139 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt with IP's: []
	I0414 14:29:14.734988 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt ...
	I0414 14:29:14.735020 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt: {Name:mkd4197f76084714cf4c93b86f69c9de5e486dfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.735175 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key ...
	I0414 14:29:14.735185 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key: {Name:mkafd73813de8b0bb698e460f51557bc241d5b76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:14.735249 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0414 14:29:14.735287 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0414 14:29:14.735300 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0414 14:29:14.735312 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0414 14:29:14.735324 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0414 14:29:14.735336 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0414 14:29:14.735348 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0414 14:29:14.735362 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0414 14:29:14.735413 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem (1338 bytes)
	W0414 14:29:14.735450 1213155 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639_empty.pem, impossibly tiny 0 bytes
	I0414 14:29:14.735459 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 14:29:14.735483 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem (1082 bytes)
	I0414 14:29:14.735504 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:29:14.735524 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem (1675 bytes)
	I0414 14:29:14.735559 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:29:14.735585 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:14.735598 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem -> /usr/share/ca-certificates/1203639.pem
	I0414 14:29:14.735609 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /usr/share/ca-certificates/12036392.pem
	I0414 14:29:14.736193 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:29:14.767094 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:29:14.800218 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:29:14.821856 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:29:14.844537 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0414 14:29:14.866333 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 14:29:14.888112 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:29:14.916382 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 14:29:14.938747 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:29:14.961044 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem --> /usr/share/ca-certificates/1203639.pem (1338 bytes)
	I0414 14:29:14.982817 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /usr/share/ca-certificates/12036392.pem (1708 bytes)
	I0414 14:29:15.004432 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 14:29:15.020381 1213155 ssh_runner.go:195] Run: openssl version
	I0414 14:29:15.026049 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:29:15.036472 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:15.040722 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:15.040772 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:15.046327 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 14:29:15.056866 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1203639.pem && ln -fs /usr/share/ca-certificates/1203639.pem /etc/ssl/certs/1203639.pem"
	I0414 14:29:15.067689 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1203639.pem
	I0414 14:29:15.071944 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1203639.pem
	I0414 14:29:15.071988 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1203639.pem
	I0414 14:29:15.077553 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1203639.pem /etc/ssl/certs/51391683.0"
	I0414 14:29:15.088088 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12036392.pem && ln -fs /usr/share/ca-certificates/12036392.pem /etc/ssl/certs/12036392.pem"
	I0414 14:29:15.098760 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12036392.pem
	I0414 14:29:15.103102 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/12036392.pem
	I0414 14:29:15.103157 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12036392.pem
	I0414 14:29:15.108670 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12036392.pem /etc/ssl/certs/3ec20f2e.0"
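(The openssl/ln pairs above install each CA into /etc/ssl/certs under its subject-hash name — b5213941.0, 51391683.0, 3ec20f2e.0 — which is how OpenSSL-based clients locate trust anchors. A Go sketch of the same two steps, shelling out to openssl exactly as the log does; the helper name is hypothetical.)

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert symlinks certPath into /etc/ssl/certs under its OpenSSL
// subject hash: the `openssl x509 -hash -noout` plus `ln -fs` dance.
func linkCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	os.Remove(link) // the -f in ln -fs: replace a stale link if present
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}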
	I0414 14:29:15.119187 1213155 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:29:15.123052 1213155 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 14:29:15.123124 1213155 kubeadm.go:392] StartCluster: {Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:29:15.123226 1213155 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0414 14:29:15.123302 1213155 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 14:29:15.161985 1213155 cri.go:89] found id: ""
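
Editor's note: before deciding whether a cluster already exists, the runner asks the CRI for any kube-system containers; an empty ID list (as here) means a fresh node. A rough equivalent of that query, assuming crictl is on PATH:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same query the runner issues: all kube-system containers, IDs only.
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		fmt.Println("crictl failed:", err)
    		return
    	}
    	ids := strings.Fields(string(out))
    	fmt.Printf("found %d container(s): %v\n", len(ids), ids)
    }
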
	I0414 14:29:15.162066 1213155 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 14:29:15.171810 1213155 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 14:29:15.180816 1213155 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 14:29:15.189781 1213155 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 14:29:15.189798 1213155 kubeadm.go:157] found existing configuration files:
	
	I0414 14:29:15.189837 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 14:29:15.198461 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 14:29:15.198520 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 14:29:15.207495 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 14:29:15.216131 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 14:29:15.216195 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 14:29:15.224923 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 14:29:15.233259 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 14:29:15.233331 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 14:29:15.241811 1213155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 14:29:15.250678 1213155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 14:29:15.250735 1213155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
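
Editor's note: each of the four checks above follows the same recipe — grep the kubeconfig for the expected control-plane endpoint and delete the file when the endpoint is absent (here every grep exits 2 simply because the files do not exist yet). A hedged sketch of that loop; the endpoint and file list come from the log, the function name is made up:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    // removeStaleConfigs deletes any kubeconfig that does not reference the
    // expected control-plane endpoint, approximating the grep-then-rm steps
    // in the log. Missing files are treated the same as stale ones.
    func removeStaleConfigs(endpoint string, paths []string) {
    	for _, p := range paths {
    		data, err := os.ReadFile(p)
    		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
    			fmt.Printf("%q not found in %s - removing\n", endpoint, p)
    			os.Remove(p) // ignore errors, as "rm -f" would
    		}
    	}
    }

    func main() {
    	removeStaleConfigs("https://control-plane.minikube.internal:8443", []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	})
    }
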
	I0414 14:29:15.260028 1213155 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 14:29:15.480841 1213155 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 14:29:26.375395 1213155 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0414 14:29:26.375454 1213155 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 14:29:26.375539 1213155 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 14:29:26.375638 1213155 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 14:29:26.375756 1213155 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 14:29:26.375859 1213155 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 14:29:26.377483 1213155 out.go:235]   - Generating certificates and keys ...
	I0414 14:29:26.377576 1213155 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 14:29:26.377649 1213155 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 14:29:26.377746 1213155 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 14:29:26.377814 1213155 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 14:29:26.377894 1213155 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 14:29:26.377993 1213155 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 14:29:26.378062 1213155 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 14:29:26.378201 1213155 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-290859 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
	I0414 14:29:26.378273 1213155 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 14:29:26.378435 1213155 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-290859 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
	I0414 14:29:26.378525 1213155 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 14:29:26.378617 1213155 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 14:29:26.378679 1213155 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 14:29:26.378756 1213155 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 14:29:26.378826 1213155 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 14:29:26.378905 1213155 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0414 14:29:26.378987 1213155 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 14:29:26.379078 1213155 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 14:29:26.379147 1213155 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 14:29:26.379232 1213155 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 14:29:26.379336 1213155 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 14:29:26.381520 1213155 out.go:235]   - Booting up control plane ...
	I0414 14:29:26.381636 1213155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 14:29:26.381716 1213155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 14:29:26.381797 1213155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 14:29:26.381942 1213155 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 14:29:26.382066 1213155 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 14:29:26.382127 1213155 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 14:29:26.382279 1213155 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0414 14:29:26.382430 1213155 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0414 14:29:26.382522 1213155 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.073677ms
	I0414 14:29:26.382613 1213155 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0414 14:29:26.382699 1213155 kubeadm.go:310] [api-check] The API server is healthy after 6.046564753s
	I0414 14:29:26.382824 1213155 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0414 14:29:26.382965 1213155 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0414 14:29:26.383055 1213155 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0414 14:29:26.383232 1213155 kubeadm.go:310] [mark-control-plane] Marking the node ha-290859 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0414 14:29:26.383336 1213155 kubeadm.go:310] [bootstrap-token] Using token: vqb1fe.jxjhh2el8g0wstxf
	I0414 14:29:26.384515 1213155 out.go:235]   - Configuring RBAC rules ...
	I0414 14:29:26.384631 1213155 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0414 14:29:26.384713 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0414 14:29:26.384863 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0414 14:29:26.384975 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0414 14:29:26.385071 1213155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0414 14:29:26.385151 1213155 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0414 14:29:26.385262 1213155 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0414 14:29:26.385326 1213155 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0414 14:29:26.385400 1213155 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0414 14:29:26.385416 1213155 kubeadm.go:310] 
	I0414 14:29:26.385469 1213155 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0414 14:29:26.385475 1213155 kubeadm.go:310] 
	I0414 14:29:26.385551 1213155 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0414 14:29:26.385557 1213155 kubeadm.go:310] 
	I0414 14:29:26.385578 1213155 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0414 14:29:26.385628 1213155 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0414 14:29:26.385686 1213155 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0414 14:29:26.385693 1213155 kubeadm.go:310] 
	I0414 14:29:26.385743 1213155 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0414 14:29:26.385752 1213155 kubeadm.go:310] 
	I0414 14:29:26.385800 1213155 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0414 14:29:26.385806 1213155 kubeadm.go:310] 
	I0414 14:29:26.385852 1213155 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0414 14:29:26.385921 1213155 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0414 14:29:26.385993 1213155 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0414 14:29:26.385999 1213155 kubeadm.go:310] 
	I0414 14:29:26.386068 1213155 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0414 14:29:26.386137 1213155 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0414 14:29:26.386143 1213155 kubeadm.go:310] 
	I0414 14:29:26.386219 1213155 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token vqb1fe.jxjhh2el8g0wstxf \
	I0414 14:29:26.386324 1213155 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c1bc537cee1b1ab5982921331b936a1839b1da6b0963279993bdeae11071854b \
	I0414 14:29:26.386357 1213155 kubeadm.go:310] 	--control-plane 
	I0414 14:29:26.386367 1213155 kubeadm.go:310] 
	I0414 14:29:26.386468 1213155 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0414 14:29:26.386481 1213155 kubeadm.go:310] 
	I0414 14:29:26.386583 1213155 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vqb1fe.jxjhh2el8g0wstxf \
	I0414 14:29:26.386727 1213155 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c1bc537cee1b1ab5982921331b936a1839b1da6b0963279993bdeae11071854b 
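
Editor's note: the --discovery-token-ca-cert-hash printed in both join commands is kubeadm's public-key pin: the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A small sketch of that derivation, assuming a PEM-encoded CA at an illustrative path:

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    // caCertHash reproduces kubeadm's pin format:
    // "sha256:" + hex(SHA-256(DER SubjectPublicKeyInfo)).
    func caCertHash(caPEMPath string) (string, error) {
    	data, err := os.ReadFile(caPEMPath)
    	if err != nil {
    		return "", err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return "", fmt.Errorf("no PEM block in %s", caPEMPath)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return "", err
    	}
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	return fmt.Sprintf("sha256:%x", sum), nil
    }

    func main() {
    	// Illustrative path; this run's certificateDir is /var/lib/minikube/certs.
    	h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println(h)
    }
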
	I0414 14:29:26.386755 1213155 cni.go:84] Creating CNI manager for ""
	I0414 14:29:26.386764 1213155 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0414 14:29:26.388208 1213155 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0414 14:29:26.389242 1213155 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0414 14:29:26.394753 1213155 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0414 14:29:26.394774 1213155 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0414 14:29:26.412210 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0414 14:29:26.820060 1213155 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 14:29:26.820136 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:26.820188 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-290859 minikube.k8s.io/updated_at=2025_04_14T14_29_26_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=ed8f1f01b35eff2786f40199152a1775806f2de2 minikube.k8s.io/name=ha-290859 minikube.k8s.io/primary=true
	I0414 14:29:27.135153 1213155 ops.go:34] apiserver oom_adj: -16
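
Editor's note: the oom_adj probe above confirms the kubelet gave kube-apiserver a strongly negative OOM score (-16), so the kernel prefers to kill other processes first. The same check done natively rather than via ssh_runner, taking the first pgrep match as the shell pipeline does:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Resolve the apiserver PID as "pgrep kube-apiserver" does.
    	out, err := exec.Command("pgrep", "kube-apiserver").Output()
    	if err != nil {
    		panic(err)
    	}
    	pid := strings.Fields(string(out))[0]
    	// Read the legacy oom_adj file, as the logged command does.
    	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("apiserver oom_adj: %s", adj)
    }
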
	I0414 14:29:27.135367 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:27.635449 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:28.135449 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:28.636235 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:29.136309 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:29.636026 1213155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:29:29.742992 1213155 kubeadm.go:1113] duration metric: took 2.922923817s to wait for elevateKubeSystemPrivileges
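
Editor's note: the repeated "kubectl get sa default" calls above are a fixed-interval poll — the runner retries every ~500ms until the default service account appears, which took about 2.9s here. A simplified version of that wait; the kubeconfig path is from the log, kubectl is assumed on PATH, and the timeout is chosen arbitrarily:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForDefaultSA polls "kubectl get sa default" every 500ms until the
    // default service account exists or the deadline passes.
    func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
    		if err := cmd.Run(); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("default service account not ready after %s", timeout)
    }

    func main() {
    	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
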
	I0414 14:29:29.743045 1213155 kubeadm.go:394] duration metric: took 14.619926947s to StartCluster
	I0414 14:29:29.743074 1213155 settings.go:142] acquiring lock: {Name:mk41907a6d0da0bb56b7cd58b5d8065ec36ecc97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:29.743194 1213155 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:29:29.744197 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/kubeconfig: {Name:mkeb969af3beabfdafe344f27031959a97621135 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:29.744491 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0414 14:29:29.744502 1213155 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0414 14:29:29.744531 1213155 start.go:241] waiting for startup goroutines ...
	I0414 14:29:29.744555 1213155 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 14:29:29.744638 1213155 addons.go:69] Setting storage-provisioner=true in profile "ha-290859"
	I0414 14:29:29.744667 1213155 addons.go:238] Setting addon storage-provisioner=true in "ha-290859"
	I0414 14:29:29.744674 1213155 addons.go:69] Setting default-storageclass=true in profile "ha-290859"
	I0414 14:29:29.744699 1213155 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:29:29.744707 1213155 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-290859"
	I0414 14:29:29.744811 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:29.745181 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.745244 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.745183 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.745351 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.761398 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40887
	I0414 14:29:29.761447 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39907
	I0414 14:29:29.761914 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.762048 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.762457 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.762483 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.762590 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.762615 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.762878 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.762995 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.763052 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:29.763589 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.763641 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.765711 1213155 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:29:29.765898 1213155 kapi.go:59] client config for ha-290859: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt", KeyFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key", CAFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
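
Editor's note: kapi.go builds the dumped *rest.Config from the integration kubeconfig. For reference, a minimal client-go sketch that produces an equivalent config and clientset from the same file:

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Load a *rest.Config like the one dumped above; path from the log.
    	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20512-1196368/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("host:", config.Host, "client ready:", clientset != nil)
    }
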
	I0414 14:29:29.766513 1213155 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0414 14:29:29.766536 1213155 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0414 14:29:29.766543 1213155 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0414 14:29:29.766547 1213155 cert_rotation.go:140] Starting client certificate rotation controller
	I0414 14:29:29.766549 1213155 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0414 14:29:29.766958 1213155 addons.go:238] Setting addon default-storageclass=true in "ha-290859"
	I0414 14:29:29.767009 1213155 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:29:29.767411 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.767464 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.779638 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46315
	I0414 14:29:29.780179 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.780847 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.780887 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.781279 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.781512 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:29.783372 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:29.783403 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36833
	I0414 14:29:29.783908 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.784349 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.784370 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.784677 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.785084 1213155 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 14:29:29.785313 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:29.785366 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:29.786178 1213155 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 14:29:29.786200 1213155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 14:29:29.786221 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:29.789923 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:29.790430 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:29.790464 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:29.790637 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:29.790795 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:29.790922 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:29.791099 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:29.802732 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37933
	I0414 14:29:29.803356 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:29.803862 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:29.803890 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:29.804276 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:29.804490 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:29.806170 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:29.806431 1213155 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 14:29:29.806453 1213155 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 14:29:29.806472 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:29.808998 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:29.809401 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:29.809433 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:29.809569 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:29.809729 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:29.809892 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:29.810022 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:29.896163 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0414 14:29:29.925192 1213155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 14:29:29.976032 1213155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 14:29:30.538988 1213155 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
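
Editor's note: the pipeline above rewrites the CoreDNS Corefile in place — sed splices a hosts{} stanza mapping host.minikube.internal to the gateway IP ahead of the forward plugin, and kubectl replace pushes the result back. A rough API-based equivalent (not what minikube runs), assuming the stock Corefile layout; values are taken from the log:

    package main

    import (
    	"context"
    	"fmt"
    	"strings"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	core, ok := cm.Data["Corefile"]
    	if !ok {
    		panic("Corefile key missing")
    	}
    	// Insert the hosts block just before the forward plugin, as the sed does.
    	hosts := "        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }\n"
    	cm.Data["Corefile"] = strings.Replace(core,
    		"        forward . /etc/resolv.conf", hosts+"        forward . /etc/resolv.conf", 1)
    	if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(context.TODO(), cm, metav1.UpdateOptions{}); err != nil {
    		panic(err)
    	}
    	fmt.Println("host record injected")
    }
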
	I0414 14:29:30.715801 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.715837 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.715837 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.715853 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.716172 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.716195 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.716206 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.716213 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.716280 1213155 main.go:141] libmachine: (ha-290859) DBG | Closing plugin on server side
	I0414 14:29:30.716311 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.716327 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.716336 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.716346 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.716567 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.716583 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.716597 1213155 main.go:141] libmachine: (ha-290859) DBG | Closing plugin on server side
	I0414 14:29:30.716566 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.716613 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.716759 1213155 round_trippers.go:470] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0414 14:29:30.716773 1213155 round_trippers.go:476] Request Headers:
	I0414 14:29:30.716785 1213155 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:29:30.716791 1213155 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:29:30.730413 1213155 round_trippers.go:581] Response Status: 200 OK in 13 milliseconds
	I0414 14:29:30.730637 1213155 round_trippers.go:470] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0414 14:29:30.730648 1213155 round_trippers.go:476] Request Headers:
	I0414 14:29:30.730655 1213155 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:29:30.730659 1213155 round_trippers.go:480]     Content-Type: application/vnd.kubernetes.protobuf
	I0414 14:29:30.730662 1213155 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:29:30.734349 1213155 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0414 14:29:30.734498 1213155 main.go:141] libmachine: Making call to close driver server
	I0414 14:29:30.734513 1213155 main.go:141] libmachine: (ha-290859) Calling .Close
	I0414 14:29:30.734892 1213155 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:29:30.734913 1213155 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:29:30.734944 1213155 main.go:141] libmachine: (ha-290859) DBG | Closing plugin on server side
	I0414 14:29:30.736606 1213155 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0414 14:29:30.738276 1213155 addons.go:514] duration metric: took 993.723048ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0414 14:29:30.738323 1213155 start.go:246] waiting for cluster config update ...
	I0414 14:29:30.738339 1213155 start.go:255] writing updated cluster config ...
	I0414 14:29:30.739993 1213155 out.go:201] 
	I0414 14:29:30.741235 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:30.741303 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:29:30.742718 1213155 out.go:177] * Starting "ha-290859-m02" control-plane node in "ha-290859" cluster
	I0414 14:29:30.743745 1213155 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:29:30.743770 1213155 cache.go:56] Caching tarball of preloaded images
	I0414 14:29:30.743876 1213155 preload.go:172] Found /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0414 14:29:30.743890 1213155 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0414 14:29:30.743970 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:29:30.744172 1213155 start.go:360] acquireMachinesLock for ha-290859-m02: {Name:mk496006d22a0565bb9e0d565e1b3cb0cf0971cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 14:29:30.744229 1213155 start.go:364] duration metric: took 28.185µs to acquireMachinesLock for "ha-290859-m02"
	I0414 14:29:30.744249 1213155 start.go:93] Provisioning new machine with config: &{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0414 14:29:30.744334 1213155 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0414 14:29:30.745838 1213155 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0414 14:29:30.745923 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:30.745962 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:30.761449 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46555
	I0414 14:29:30.761938 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:30.762474 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:30.762500 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:30.762925 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:30.763197 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:29:30.763401 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:30.763637 1213155 start.go:159] libmachine.API.Create for "ha-290859" (driver="kvm2")
	I0414 14:29:30.763675 1213155 client.go:168] LocalClient.Create starting
	I0414 14:29:30.763717 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem
	I0414 14:29:30.763761 1213155 main.go:141] libmachine: Decoding PEM data...
	I0414 14:29:30.763783 1213155 main.go:141] libmachine: Parsing certificate...
	I0414 14:29:30.763861 1213155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem
	I0414 14:29:30.763890 1213155 main.go:141] libmachine: Decoding PEM data...
	I0414 14:29:30.763907 1213155 main.go:141] libmachine: Parsing certificate...
	I0414 14:29:30.763954 1213155 main.go:141] libmachine: Running pre-create checks...
	I0414 14:29:30.763968 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .PreCreateCheck
	I0414 14:29:30.764183 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
	I0414 14:29:30.764607 1213155 main.go:141] libmachine: Creating machine...
	I0414 14:29:30.764633 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .Create
	I0414 14:29:30.764796 1213155 main.go:141] libmachine: (ha-290859-m02) creating KVM machine...
	I0414 14:29:30.764820 1213155 main.go:141] libmachine: (ha-290859-m02) creating network...
	I0414 14:29:30.765949 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found existing default KVM network
	I0414 14:29:30.766029 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found existing private KVM network mk-ha-290859
	I0414 14:29:30.766196 1213155 main.go:141] libmachine: (ha-290859-m02) setting up store path in /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02 ...
	I0414 14:29:30.766222 1213155 main.go:141] libmachine: (ha-290859-m02) building disk image from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 14:29:30.766301 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:30.766189 1213531 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:29:30.766373 1213155 main.go:141] libmachine: (ha-290859-m02) Downloading /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 14:29:31.062543 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:31.062391 1213531 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa...
	I0414 14:29:31.719024 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:31.718890 1213531 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/ha-290859-m02.rawdisk...
	I0414 14:29:31.719061 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Writing magic tar header
	I0414 14:29:31.719076 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Writing SSH key tar header
	I0414 14:29:31.719086 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:31.719015 1213531 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02 ...
	I0414 14:29:31.719187 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02
	I0414 14:29:31.719213 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02 (perms=drwx------)
	I0414 14:29:31.719221 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines
	I0414 14:29:31.719232 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:29:31.719239 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube/machines (perms=drwxr-xr-x)
	I0414 14:29:31.719270 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1196368
	I0414 14:29:31.719288 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368/.minikube (perms=drwxr-xr-x)
	I0414 14:29:31.719298 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 14:29:31.719315 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home/jenkins
	I0414 14:29:31.719326 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | checking permissions on dir: /home
	I0414 14:29:31.719336 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | skipping /home - not owner
	I0414 14:29:31.719349 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration/20512-1196368 (perms=drwxrwxr-x)
	I0414 14:29:31.719368 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 14:29:31.719380 1213155 main.go:141] libmachine: (ha-290859-m02) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 14:29:31.719386 1213155 main.go:141] libmachine: (ha-290859-m02) creating domain...
	I0414 14:29:31.720303 1213155 main.go:141] libmachine: (ha-290859-m02) define libvirt domain using xml: 
	I0414 14:29:31.720321 1213155 main.go:141] libmachine: (ha-290859-m02) <domain type='kvm'>
	I0414 14:29:31.720330 1213155 main.go:141] libmachine: (ha-290859-m02)   <name>ha-290859-m02</name>
	I0414 14:29:31.720338 1213155 main.go:141] libmachine: (ha-290859-m02)   <memory unit='MiB'>2200</memory>
	I0414 14:29:31.720346 1213155 main.go:141] libmachine: (ha-290859-m02)   <vcpu>2</vcpu>
	I0414 14:29:31.720352 1213155 main.go:141] libmachine: (ha-290859-m02)   <features>
	I0414 14:29:31.720359 1213155 main.go:141] libmachine: (ha-290859-m02)     <acpi/>
	I0414 14:29:31.720364 1213155 main.go:141] libmachine: (ha-290859-m02)     <apic/>
	I0414 14:29:31.720371 1213155 main.go:141] libmachine: (ha-290859-m02)     <pae/>
	I0414 14:29:31.720381 1213155 main.go:141] libmachine: (ha-290859-m02)     
	I0414 14:29:31.720411 1213155 main.go:141] libmachine: (ha-290859-m02)   </features>
	I0414 14:29:31.720433 1213155 main.go:141] libmachine: (ha-290859-m02)   <cpu mode='host-passthrough'>
	I0414 14:29:31.720452 1213155 main.go:141] libmachine: (ha-290859-m02)   
	I0414 14:29:31.720461 1213155 main.go:141] libmachine: (ha-290859-m02)   </cpu>
	I0414 14:29:31.720488 1213155 main.go:141] libmachine: (ha-290859-m02)   <os>
	I0414 14:29:31.720507 1213155 main.go:141] libmachine: (ha-290859-m02)     <type>hvm</type>
	I0414 14:29:31.720537 1213155 main.go:141] libmachine: (ha-290859-m02)     <boot dev='cdrom'/>
	I0414 14:29:31.720559 1213155 main.go:141] libmachine: (ha-290859-m02)     <boot dev='hd'/>
	I0414 14:29:31.720572 1213155 main.go:141] libmachine: (ha-290859-m02)     <bootmenu enable='no'/>
	I0414 14:29:31.720587 1213155 main.go:141] libmachine: (ha-290859-m02)   </os>
	I0414 14:29:31.720597 1213155 main.go:141] libmachine: (ha-290859-m02)   <devices>
	I0414 14:29:31.720609 1213155 main.go:141] libmachine: (ha-290859-m02)     <disk type='file' device='cdrom'>
	I0414 14:29:31.720626 1213155 main.go:141] libmachine: (ha-290859-m02)       <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/boot2docker.iso'/>
	I0414 14:29:31.720637 1213155 main.go:141] libmachine: (ha-290859-m02)       <target dev='hdc' bus='scsi'/>
	I0414 14:29:31.720649 1213155 main.go:141] libmachine: (ha-290859-m02)       <readonly/>
	I0414 14:29:31.720659 1213155 main.go:141] libmachine: (ha-290859-m02)     </disk>
	I0414 14:29:31.720668 1213155 main.go:141] libmachine: (ha-290859-m02)     <disk type='file' device='disk'>
	I0414 14:29:31.720684 1213155 main.go:141] libmachine: (ha-290859-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 14:29:31.720699 1213155 main.go:141] libmachine: (ha-290859-m02)       <source file='/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/ha-290859-m02.rawdisk'/>
	I0414 14:29:31.720732 1213155 main.go:141] libmachine: (ha-290859-m02)       <target dev='hda' bus='virtio'/>
	I0414 14:29:31.720746 1213155 main.go:141] libmachine: (ha-290859-m02)     </disk>
	I0414 14:29:31.720756 1213155 main.go:141] libmachine: (ha-290859-m02)     <interface type='network'>
	I0414 14:29:31.720768 1213155 main.go:141] libmachine: (ha-290859-m02)       <source network='mk-ha-290859'/>
	I0414 14:29:31.720777 1213155 main.go:141] libmachine: (ha-290859-m02)       <model type='virtio'/>
	I0414 14:29:31.720788 1213155 main.go:141] libmachine: (ha-290859-m02)     </interface>
	I0414 14:29:31.720799 1213155 main.go:141] libmachine: (ha-290859-m02)     <interface type='network'>
	I0414 14:29:31.720809 1213155 main.go:141] libmachine: (ha-290859-m02)       <source network='default'/>
	I0414 14:29:31.720821 1213155 main.go:141] libmachine: (ha-290859-m02)       <model type='virtio'/>
	I0414 14:29:31.720835 1213155 main.go:141] libmachine: (ha-290859-m02)     </interface>
	I0414 14:29:31.720844 1213155 main.go:141] libmachine: (ha-290859-m02)     <serial type='pty'>
	I0414 14:29:31.720855 1213155 main.go:141] libmachine: (ha-290859-m02)       <target port='0'/>
	I0414 14:29:31.720865 1213155 main.go:141] libmachine: (ha-290859-m02)     </serial>
	I0414 14:29:31.720875 1213155 main.go:141] libmachine: (ha-290859-m02)     <console type='pty'>
	I0414 14:29:31.720886 1213155 main.go:141] libmachine: (ha-290859-m02)       <target type='serial' port='0'/>
	I0414 14:29:31.720896 1213155 main.go:141] libmachine: (ha-290859-m02)     </console>
	I0414 14:29:31.720909 1213155 main.go:141] libmachine: (ha-290859-m02)     <rng model='virtio'>
	I0414 14:29:31.720943 1213155 main.go:141] libmachine: (ha-290859-m02)       <backend model='random'>/dev/random</backend>
	I0414 14:29:31.720956 1213155 main.go:141] libmachine: (ha-290859-m02)     </rng>
	I0414 14:29:31.720962 1213155 main.go:141] libmachine: (ha-290859-m02)     
	I0414 14:29:31.720972 1213155 main.go:141] libmachine: (ha-290859-m02)     
	I0414 14:29:31.720978 1213155 main.go:141] libmachine: (ha-290859-m02)   </devices>
	I0414 14:29:31.720993 1213155 main.go:141] libmachine: (ha-290859-m02) </domain>
	I0414 14:29:31.721002 1213155 main.go:141] libmachine: (ha-290859-m02) 
	I0414 14:29:31.727524 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:76:01:7d in network default
	I0414 14:29:31.728172 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:31.728187 1213155 main.go:141] libmachine: (ha-290859-m02) starting domain...
	I0414 14:29:31.728195 1213155 main.go:141] libmachine: (ha-290859-m02) ensuring networks are active...
	I0414 14:29:31.728896 1213155 main.go:141] libmachine: (ha-290859-m02) Ensuring network default is active
	I0414 14:29:31.729170 1213155 main.go:141] libmachine: (ha-290859-m02) Ensuring network mk-ha-290859 is active
	I0414 14:29:31.729521 1213155 main.go:141] libmachine: (ha-290859-m02) getting domain XML...
	I0414 14:29:31.730489 1213155 main.go:141] libmachine: (ha-290859-m02) creating domain...
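
Editor's note: with the XML assembled, the driver defines the guest against qemu:///system and then starts it. A minimal sketch with the libvirt Go bindings; domainXML stands for the <domain> document printed above and is elided here:

    package main

    import (
    	"fmt"

    	libvirt "libvirt.org/go/libvirt"
    )

    func main() {
    	// Connect to the same URI the log's KVMQemuURI shows.
    	conn, err := libvirt.NewConnect("qemu:///system")
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	// domainXML would hold the <domain type='kvm'>...</domain> document
    	// printed above; elided in this sketch.
    	var domainXML string

    	dom, err := conn.DomainDefineXML(domainXML) // persist the definition
    	if err != nil {
    		panic(err)
    	}
    	defer dom.Free()
    	if err := dom.Create(); err != nil { // start ("create") the guest
    		panic(err)
    	}
    	fmt.Println("domain started")
    }
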
	I0414 14:29:32.993969 1213155 main.go:141] libmachine: (ha-290859-m02) waiting for IP...
	I0414 14:29:32.996009 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:32.996441 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:32.996505 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:32.996448 1213531 retry.go:31] will retry after 202.522594ms: waiting for domain to come up
	I0414 14:29:33.201175 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:33.201705 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:33.201751 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:33.201682 1213531 retry.go:31] will retry after 346.96007ms: waiting for domain to come up
	I0414 14:29:33.550485 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:33.550900 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:33.550931 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:33.550863 1213531 retry.go:31] will retry after 407.207189ms: waiting for domain to come up
	I0414 14:29:33.959550 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:33.960116 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:33.960149 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:33.960094 1213531 retry.go:31] will retry after 434.401549ms: waiting for domain to come up
	I0414 14:29:34.395749 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:34.396217 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:34.396267 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:34.396208 1213531 retry.go:31] will retry after 552.547121ms: waiting for domain to come up
	I0414 14:29:34.949860 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:34.950310 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:34.950344 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:34.950269 1213531 retry.go:31] will retry after 848.939274ms: waiting for domain to come up
	I0414 14:29:35.800706 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:35.801275 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:35.801301 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:35.801229 1213531 retry.go:31] will retry after 1.078619357s: waiting for domain to come up
	I0414 14:29:36.881700 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:36.882163 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:36.882187 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:36.882128 1213531 retry.go:31] will retry after 1.079210669s: waiting for domain to come up
	I0414 14:29:37.963455 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:37.963935 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:37.963969 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:37.963899 1213531 retry.go:31] will retry after 1.194058186s: waiting for domain to come up
	I0414 14:29:39.160481 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:39.160993 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:39.161031 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:39.160949 1213531 retry.go:31] will retry after 1.513626688s: waiting for domain to come up
	I0414 14:29:40.676551 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:40.677038 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:40.677071 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:40.677004 1213531 retry.go:31] will retry after 1.924347004s: waiting for domain to come up
	I0414 14:29:42.603644 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:42.604168 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:42.604192 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:42.604145 1213531 retry.go:31] will retry after 2.797639018s: waiting for domain to come up
	I0414 14:29:45.405004 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:45.405658 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:45.405688 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:45.405627 1213531 retry.go:31] will retry after 2.864814671s: waiting for domain to come up
	I0414 14:29:48.274060 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:48.274518 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:29:48.274591 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:29:48.274508 1213531 retry.go:31] will retry after 4.611052523s: waiting for domain to come up
	I0414 14:29:52.886693 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:52.887068 1213155 main.go:141] libmachine: (ha-290859-m02) found domain IP: 192.168.39.111
	I0414 14:29:52.887093 1213155 main.go:141] libmachine: (ha-290859-m02) reserving static IP address...
	I0414 14:29:52.887105 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has current primary IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:52.887506 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find host DHCP lease matching {name: "ha-290859-m02", mac: "52:54:00:f0:fd:94", ip: "192.168.39.111"} in network mk-ha-290859
	I0414 14:29:52.966052 1213155 main.go:141] libmachine: (ha-290859-m02) reserved static IP address 192.168.39.111 for domain ha-290859-m02
	I0414 14:29:52.966083 1213155 main.go:141] libmachine: (ha-290859-m02) waiting for SSH...
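
The retry loop above is the driver's standard wait-for-domain pattern: probe for the domain's IP, log "will retry after", sleep a growing interval, and try again until the lease appears. A minimal Go sketch of that pattern follows; the starting delay and growth factor are illustrative, not minikube's actual retry.go implementation.

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff retries fn until it succeeds or maxWait elapses,
// growing the delay between attempts, as in the log lines above.
func retryWithBackoff(fn func() error, maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	delay := 400 * time.Millisecond // illustrative starting interval
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().Add(delay).After(deadline) {
			return fmt.Errorf("timed out; last error: %w", err)
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the interval each round
	}
}

func main() {
	attempts := 0
	err := retryWithBackoff(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("waiting for domain to come up")
		}
		return nil
	}, time.Minute)
	fmt.Println("done:", err)
}
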
	I0414 14:29:52.966091 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Getting to WaitForSSH function...
	I0414 14:29:52.968665 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:52.969034 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:52.969082 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:52.969208 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Using SSH client type: external
	I0414 14:29:52.969231 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa (-rw-------)
	I0414 14:29:52.969263 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:29:52.969282 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | About to run SSH command:
	I0414 14:29:52.969295 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | exit 0
	I0414 14:29:53.095336 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | SSH cmd err, output: <nil>: 
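
The WaitForSSH step above shells out to ssh with host-key checking disabled and runs `exit 0` until the command succeeds, which signals that the guest's sshd is up. A hedged sketch of that probe, reusing the ssh options visible in the log; the key path and the attempt budget are assumptions.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH runs `ssh ... exit 0` until it succeeds or gives up.
func waitForSSH(ip, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit 0",
	}
	for i := 0; i < 30; i++ { // assumed attempt budget
		if err := exec.Command("ssh", args...).Run(); err == nil {
			return nil // guest sshd accepted the connection
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh to %s never came up", ip)
}

func main() {
	fmt.Println(waitForSSH("192.168.39.111", "/path/to/id_rsa"))
}
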
	I0414 14:29:53.095545 1213155 main.go:141] libmachine: (ha-290859-m02) KVM machine creation complete
	I0414 14:29:53.095910 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
	I0414 14:29:53.096462 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:53.096622 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:53.096806 1213155 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 14:29:53.096820 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetState
	I0414 14:29:53.098070 1213155 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 14:29:53.098085 1213155 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 14:29:53.098090 1213155 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 14:29:53.098095 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.100244 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.100649 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.100680 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.100852 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.101066 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.101236 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.101372 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.101519 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:53.101769 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:53.101782 1213155 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 14:29:53.206593 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:29:53.206617 1213155 main.go:141] libmachine: Detecting the provisioner...
	I0414 14:29:53.206628 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.209588 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.209969 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.209988 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.210187 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.210382 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.210544 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.210717 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.210971 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:53.211192 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:53.211205 1213155 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 14:29:53.315888 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 14:29:53.315980 1213155 main.go:141] libmachine: found compatible host: buildroot
	I0414 14:29:53.315990 1213155 main.go:141] libmachine: Provisioning with buildroot...
	I0414 14:29:53.316001 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:29:53.316277 1213155 buildroot.go:166] provisioning hostname "ha-290859-m02"
	I0414 14:29:53.316306 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:29:53.316451 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.319393 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.319803 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.319837 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.319946 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.320140 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.320321 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.320450 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.320602 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:53.320806 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:53.320818 1213155 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-290859-m02 && echo "ha-290859-m02" | sudo tee /etc/hostname
	I0414 14:29:53.442594 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-290859-m02
	
	I0414 14:29:53.442629 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.445561 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.445918 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.445944 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.446150 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.446351 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.446528 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.446678 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.446833 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:53.447038 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:53.447053 1213155 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-290859-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-290859-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-290859-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:29:53.559946 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
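
The hostname step runs the small shell script above to keep /etc/hosts consistent with the new hostname: if no line already ends with the hostname, it rewrites the 127.0.1.1 entry, or appends one. The same logic as a self-contained Go sketch, operating on a string copy rather than the real /etc/hosts:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// pinHostname mirrors the shell logic above on an in-memory hosts file.
func pinHostname(hosts, name string) string {
	if regexp.MustCompile(`(?m)\s`+regexp.QuoteMeta(name)+`$`).MatchString(hosts) {
		return hosts // hostname already mapped
	}
	loop := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loop.MatchString(hosts) {
		return loop.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Print(pinHostname("127.0.0.1 localhost\n", "ha-290859-m02"))
}
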
	I0414 14:29:53.559988 1213155 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1196368/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1196368/.minikube}
	I0414 14:29:53.560014 1213155 buildroot.go:174] setting up certificates
	I0414 14:29:53.560031 1213155 provision.go:84] configureAuth start
	I0414 14:29:53.560046 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:29:53.560377 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:29:53.562822 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.563207 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.563237 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.563574 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.566107 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.566478 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.566505 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.566628 1213155 provision.go:143] copyHostCerts
	I0414 14:29:53.566676 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:29:53.566716 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem, removing ...
	I0414 14:29:53.566730 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:29:53.566839 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem (1082 bytes)
	I0414 14:29:53.566954 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:29:53.566979 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem, removing ...
	I0414 14:29:53.566987 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:29:53.567026 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem (1123 bytes)
	I0414 14:29:53.567106 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:29:53.567130 1213155 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem, removing ...
	I0414 14:29:53.567137 1213155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:29:53.567173 1213155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem (1675 bytes)
	I0414 14:29:53.567293 1213155 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem org=jenkins.ha-290859-m02 san=[127.0.0.1 192.168.39.111 ha-290859-m02 localhost minikube]
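
The server certificate above is signed by the minikube CA with the SAN list shown: loopback, the node IP, and the node's names. As a rough illustration only, here is a self-signed variant built with crypto/x509 using those SANs; minikube's real code signs against ca.pem/ca-key.pem rather than self-signing.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-290859-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.111")},
		DNSNames:    []string{"ha-290859-m02", "localhost", "minikube"},
	}
	// Self-signed for brevity: template doubles as its own parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
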
	I0414 14:29:53.976110 1213155 provision.go:177] copyRemoteCerts
	I0414 14:29:53.976184 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:29:53.976219 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:53.978798 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.979170 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:53.979202 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:53.979355 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:53.979571 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:53.979771 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:53.979950 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:29:54.060926 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0414 14:29:54.061020 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0414 14:29:54.083723 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0414 14:29:54.083818 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 14:29:54.106702 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0414 14:29:54.106773 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 14:29:54.128136 1213155 provision.go:87] duration metric: took 568.088664ms to configureAuth
	I0414 14:29:54.128177 1213155 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:29:54.128372 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:54.128400 1213155 main.go:141] libmachine: Checking connection to Docker...
	I0414 14:29:54.128413 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetURL
	I0414 14:29:54.129571 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | using libvirt version 6000000
	I0414 14:29:54.131690 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.132071 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.132095 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.132296 1213155 main.go:141] libmachine: Docker is up and running!
	I0414 14:29:54.132311 1213155 main.go:141] libmachine: Reticulating splines...
	I0414 14:29:54.132318 1213155 client.go:171] duration metric: took 23.368636066s to LocalClient.Create
	I0414 14:29:54.132344 1213155 start.go:167] duration metric: took 23.368708618s to libmachine.API.Create "ha-290859"
	I0414 14:29:54.132356 1213155 start.go:293] postStartSetup for "ha-290859-m02" (driver="kvm2")
	I0414 14:29:54.132370 1213155 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:29:54.132394 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.132652 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:29:54.132681 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:54.134726 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.135119 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.135146 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.135312 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:54.135512 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.135648 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:54.135782 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:29:54.217134 1213155 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:29:54.221237 1213155 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:29:54.221265 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/addons for local assets ...
	I0414 14:29:54.221324 1213155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/files for local assets ...
	I0414 14:29:54.221392 1213155 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> 12036392.pem in /etc/ssl/certs
	I0414 14:29:54.221401 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /etc/ssl/certs/12036392.pem
	I0414 14:29:54.221495 1213155 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:29:54.230111 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:29:54.253934 1213155 start.go:296] duration metric: took 121.560617ms for postStartSetup
	I0414 14:29:54.253995 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
	I0414 14:29:54.254683 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:29:54.257374 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.257778 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.257811 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.258118 1213155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:29:54.258332 1213155 start.go:128] duration metric: took 23.513984018s to createHost
	I0414 14:29:54.258362 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:54.260873 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.261257 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.261285 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.261448 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:54.261638 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.261821 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.261984 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:54.262185 1213155 main.go:141] libmachine: Using SSH client type: native
	I0414 14:29:54.262369 1213155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:29:54.262379 1213155 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:29:54.367727 1213155 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744640994.343893226
	
	I0414 14:29:54.367759 1213155 fix.go:216] guest clock: 1744640994.343893226
	I0414 14:29:54.367766 1213155 fix.go:229] Guest: 2025-04-14 14:29:54.343893226 +0000 UTC Remote: 2025-04-14 14:29:54.258346943 +0000 UTC m=+69.442509295 (delta=85.546283ms)
	I0414 14:29:54.367782 1213155 fix.go:200] guest clock delta is within tolerance: 85.546283ms
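
The guest-clock check parses the `date +%s.%N` output and compares it to the host's wall clock; the run above measured an 85ms delta, well inside tolerance. A small sketch of that comparison; the 2s threshold is an assumption, and float parsing coarsens the nanoseconds.

package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	const guestOut = "1744640994.343893226" // guest `date +%s.%N` from the log
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	// float64 precision loses some nanoseconds at this magnitude; fine for a skew check.
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta < tolerance)
}
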
	I0414 14:29:54.367788 1213155 start.go:83] releasing machines lock for "ha-290859-m02", held for 23.623550564s
	I0414 14:29:54.367807 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.368115 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:29:54.370975 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.371432 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.371462 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.373758 1213155 out.go:177] * Found network options:
	I0414 14:29:54.375127 1213155 out.go:177]   - NO_PROXY=192.168.39.110
	W0414 14:29:54.376278 1213155 proxy.go:119] fail to check proxy env: Error ip not in block
	I0414 14:29:54.376312 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.376913 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.377127 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:29:54.377268 1213155 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:29:54.377316 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	W0414 14:29:54.377370 1213155 proxy.go:119] fail to check proxy env: Error ip not in block
	I0414 14:29:54.377457 1213155 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0414 14:29:54.377481 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:29:54.380102 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.380374 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.380406 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.380429 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.380578 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:54.380741 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.380859 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:54.380897 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:54.380909 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:54.381045 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:29:54.381125 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:29:54.381305 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:29:54.381467 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:29:54.381614 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	W0414 14:29:54.458225 1213155 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:29:54.458308 1213155 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:29:54.490449 1213155 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 14:29:54.490475 1213155 start.go:495] detecting cgroup driver to use...
	I0414 14:29:54.490555 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0414 14:29:54.524660 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0414 14:29:54.537871 1213155 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:29:54.537936 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:29:54.549801 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:29:54.562203 1213155 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:29:54.666348 1213155 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:29:54.786710 1213155 docker.go:233] disabling docker service ...
	I0414 14:29:54.786789 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:29:54.800092 1213155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:29:54.812105 1213155 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:29:54.936777 1213155 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:29:55.059002 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 14:29:55.072980 1213155 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:29:55.089970 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0414 14:29:55.099362 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0414 14:29:55.108681 1213155 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0414 14:29:55.108766 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0414 14:29:55.118203 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:29:55.127402 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0414 14:29:55.136483 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:29:55.145554 1213155 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:29:55.154769 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0414 14:29:55.163700 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0414 14:29:55.172612 1213155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
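
The run of sed commands above rewrites /etc/containerd/config.toml in place: pin the sandbox (pause) image, force `SystemdCgroup = false` so containerd uses the cgroupfs driver, move the runc runtime to v2, and point conf_dir at /etc/cni/net.d. A sketch of the central rewrite expressed as a Go regexp instead of sed, on an in-memory copy of the config:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	config := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
`
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(config, "${1}SystemdCgroup = false"))
}
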
	I0414 14:29:55.181597 1213155 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:29:55.189962 1213155 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:29:55.190019 1213155 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:29:55.202112 1213155 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 14:29:55.210883 1213155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:29:55.319480 1213155 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0414 14:29:55.344914 1213155 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0414 14:29:55.345008 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:29:55.349081 1213155 retry.go:31] will retry after 1.00520308s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0414 14:29:56.354657 1213155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
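
After restarting containerd, minikube polls /run/containerd/containerd.sock with `stat`, retrying until the socket exists (one retry was needed above). An equivalent readiness probe that dials the socket instead of stat-ing it; the path comes from the log, the timeout is an assumption.

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSocket polls until the unix socket accepts a connection.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if c, err := net.Dial("unix", path); err == nil {
			c.Close()
			return nil // daemon is listening
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("socket %s not ready after %v", path, timeout)
}

func main() {
	fmt.Println(waitForSocket("/run/containerd/containerd.sock", time.Minute))
}
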
	I0414 14:29:56.359600 1213155 start.go:563] Will wait 60s for crictl version
	I0414 14:29:56.359685 1213155 ssh_runner.go:195] Run: which crictl
	I0414 14:29:56.363336 1213155 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:29:56.403201 1213155 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0414 14:29:56.403312 1213155 ssh_runner.go:195] Run: containerd --version
	I0414 14:29:56.430179 1213155 ssh_runner.go:195] Run: containerd --version
	I0414 14:29:56.454598 1213155 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.23 ...
	I0414 14:29:56.455785 1213155 out.go:177]   - env NO_PROXY=192.168.39.110
	I0414 14:29:56.456735 1213155 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:29:56.459280 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:56.459661 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:29:45 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:29:56.459691 1213155 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:29:56.459901 1213155 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 14:29:56.463673 1213155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:29:56.475057 1213155 mustload.go:65] Loading cluster: ha-290859
	I0414 14:29:56.475248 1213155 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:29:56.475557 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:56.475600 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:56.490597 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45247
	I0414 14:29:56.491136 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:56.491690 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:56.491711 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:56.492119 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:56.492309 1213155 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:29:56.493794 1213155 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:29:56.494134 1213155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:29:56.494173 1213155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:29:56.509360 1213155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38381
	I0414 14:29:56.509774 1213155 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:29:56.510229 1213155 main.go:141] libmachine: Using API Version  1
	I0414 14:29:56.510256 1213155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:29:56.510618 1213155 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:29:56.510840 1213155 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:29:56.511031 1213155 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859 for IP: 192.168.39.111
	I0414 14:29:56.511044 1213155 certs.go:194] generating shared ca certs ...
	I0414 14:29:56.511057 1213155 certs.go:226] acquiring lock for ca certs: {Name:mk7215406b4c41badf9eca6bf9f1036fd88f670e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:56.511177 1213155 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key
	I0414 14:29:56.511226 1213155 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key
	I0414 14:29:56.511236 1213155 certs.go:256] generating profile certs ...
	I0414 14:29:56.511347 1213155 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key
	I0414 14:29:56.511373 1213155 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e
	I0414 14:29:56.511386 1213155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110 192.168.39.111 192.168.39.254]
	I0414 14:29:56.589532 1213155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e ...
	I0414 14:29:56.589564 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e: {Name:mk9fb7b2adad4a62e9ebf1f50826b8647aaaa2d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:56.589727 1213155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e ...
	I0414 14:29:56.589740 1213155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e: {Name:mk7ad07038879568d4a23c2fb5c04f12405eb02f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:29:56.589811 1213155 certs.go:381] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.e4b1b06e -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt
	I0414 14:29:56.589948 1213155 certs.go:385] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key
	I0414 14:29:56.590096 1213155 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key
	I0414 14:29:56.590118 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0414 14:29:56.590137 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0414 14:29:56.590151 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0414 14:29:56.590162 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0414 14:29:56.590180 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0414 14:29:56.590198 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0414 14:29:56.590211 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0414 14:29:56.590220 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0414 14:29:56.590271 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem (1338 bytes)
	W0414 14:29:56.590298 1213155 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639_empty.pem, impossibly tiny 0 bytes
	I0414 14:29:56.590308 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 14:29:56.590327 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem (1082 bytes)
	I0414 14:29:56.590346 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:29:56.590368 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem (1675 bytes)
	I0414 14:29:56.590404 1213155 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:29:56.590430 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:56.590446 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem -> /usr/share/ca-certificates/1203639.pem
	I0414 14:29:56.590457 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /usr/share/ca-certificates/12036392.pem
	I0414 14:29:56.590494 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:29:56.593379 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:56.593755 1213155 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:28:59 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:29:56.593777 1213155 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:29:56.593996 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:29:56.594232 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:29:56.594405 1213155 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:29:56.594540 1213155 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:29:56.671687 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0414 14:29:56.677338 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0414 14:29:56.689003 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0414 14:29:56.693487 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0414 14:29:56.704430 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0414 14:29:56.708650 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0414 14:29:56.719039 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0414 14:29:56.723166 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0414 14:29:56.734152 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0414 14:29:56.738243 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0414 14:29:56.749081 1213155 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0414 14:29:56.753248 1213155 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0414 14:29:56.764073 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:29:56.788198 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:29:56.813073 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:29:56.835958 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:29:56.859645 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0414 14:29:56.882879 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 14:29:56.906187 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:29:56.928932 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 14:29:56.952365 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:29:56.974920 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem --> /usr/share/ca-certificates/1203639.pem (1338 bytes)
	I0414 14:29:56.998466 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /usr/share/ca-certificates/12036392.pem (1708 bytes)
	I0414 14:29:57.022704 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0414 14:29:57.038828 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0414 14:29:57.054237 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0414 14:29:57.069513 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0414 14:29:57.085532 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0414 14:29:57.101522 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0414 14:29:57.117372 1213155 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0414 14:29:57.132827 1213155 ssh_runner.go:195] Run: openssl version
	I0414 14:29:57.138331 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:29:57.148324 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:57.152469 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:57.152557 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:29:57.158279 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 14:29:57.169126 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1203639.pem && ln -fs /usr/share/ca-certificates/1203639.pem /etc/ssl/certs/1203639.pem"
	I0414 14:29:57.179995 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1203639.pem
	I0414 14:29:57.184265 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1203639.pem
	I0414 14:29:57.184340 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1203639.pem
	I0414 14:29:57.189810 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1203639.pem /etc/ssl/certs/51391683.0"
	I0414 14:29:57.199987 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12036392.pem && ln -fs /usr/share/ca-certificates/12036392.pem /etc/ssl/certs/12036392.pem"
	I0414 14:29:57.210177 1213155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12036392.pem
	I0414 14:29:57.214740 1213155 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/12036392.pem
	I0414 14:29:57.214815 1213155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12036392.pem
	I0414 14:29:57.221853 1213155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12036392.pem /etc/ssl/certs/3ec20f2e.0"
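
Each CA placed under /usr/share/ca-certificates also gets a `<subject-hash>.0` symlink in /etc/ssl/certs, which is how OpenSSL locates trust anchors. A sketch that derives the hash the same way the commands above do, by shelling out to `openssl x509 -hash`; it only prints the link it would create rather than touching /etc/ssl/certs.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, as in the log
	link := "/etc/ssl/certs/" + hash + ".0"
	fmt.Println("would run: ln -fs /etc/ssl/certs/minikubeCA.pem", link)
}
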
	I0414 14:29:57.232248 1213155 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:29:57.236270 1213155 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 14:29:57.236327 1213155 kubeadm.go:934] updating node {m02 192.168.39.111 8443 v1.32.2 containerd true true} ...
	I0414 14:29:57.236439 1213155 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-290859-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 14:29:57.236473 1213155 kube-vip.go:115] generating kube-vip config ...
	I0414 14:29:57.236525 1213155 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0414 14:29:57.252239 1213155 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0414 14:29:57.252336 1213155 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.10
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
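The generated manifest pins down kube-vip's control-plane behavior: the pod ARP-announces the VIP 192.168.39.254 on eth0 (address/vip_interface), load-balances port 8443 across control planes (lb_enable/lb_port), and runs Kubernetes leader election on the plndr-cp-lock Lease with a 5s lease duration, 3s renew deadline, and 1s retry period, so only one node holds the VIP at a time. The same election parameters expressed with client-go (a sketch of the mechanism, not kube-vip's actual code):

package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Lease lock mirroring vip_leasename=plndr-cp-lock in kube-system.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "plndr-cp-lock", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "ha-290859-m02"},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 5 * time.Second, // vip_leaseduration
		RenewDeadline: 3 * time.Second, // vip_renewdeadline
		RetryPeriod:   1 * time.Second, // vip_retryperiod
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { /* winner claims the VIP (ARP announce) */ },
			OnStoppedLeading: func() { /* node losing the lease releases the VIP */ },
		},
	})
}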
	I0414 14:29:57.252412 1213155 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 14:29:57.262218 1213155 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.2': No such file or directory
	
	Initiating transfer...
	I0414 14:29:57.262295 1213155 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.2
	I0414 14:29:57.271580 1213155 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
	I0414 14:29:57.271599 1213155 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubeadm
	I0414 14:29:57.271617 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubectl -> /var/lib/minikube/binaries/v1.32.2/kubectl
	I0414 14:29:57.271622 1213155 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubelet
	I0414 14:29:57.271681 1213155 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl
	I0414 14:29:57.275804 1213155 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubectl': No such file or directory
	I0414 14:29:57.275835 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubectl --> /var/lib/minikube/binaries/v1.32.2/kubectl (57323672 bytes)
	I0414 14:29:58.408400 1213155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:29:58.423781 1213155 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubelet -> /var/lib/minikube/binaries/v1.32.2/kubelet
	I0414 14:29:58.423898 1213155 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet
	I0414 14:29:58.428378 1213155 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubelet': No such file or directory
	I0414 14:29:58.428415 1213155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubelet --> /var/lib/minikube/binaries/v1.32.2/kubelet (77406468 bytes)
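As the GUEST_START error below shows, the binaries are fetched with hashicorp/go-getter; the `?checksum=file:<url>.sha256` query tells the getter to download the .sha256 file alongside the binary and verify it before the .download staging file is accepted. A minimal reproduction of the kubeadm fetch (the destination path is illustrative):

package main

import (
	"context"
	"log"

	getter "github.com/hashicorp/go-getter"
)

func main() {
	src := "https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm" +
		"?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256"

	client := &getter.Client{
		Ctx:  context.Background(),
		Src:  src,
		Dst:  "/tmp/kubeadm.download", // staging path; a real cache would rename on success
		Mode: getter.ClientModeFile,   // fetch a single file, not a directory tree
	}
	if err := client.Get(); err != nil {
		log.Fatal(err) // this run died here: "read: connection reset by peer"
	}
}

A transient TCP reset like the one in this run surfaces directly from client.Get(); wrapping that call in a short retry loop with backoff is the usual mitigation for flaky CDN connections.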
	I0414 14:29:58.749359 1213155 out.go:201] 
	W0414 14:29:58.750775 1213155 out.go:270] X Exiting due to GUEST_START: failed to start node: adding node: update node: downloading binaries: downloading kubeadm: download failed: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256 Dst:/home/jenkins/minikube-integration/20512-1196368/.minikube/cache/linux/amd64/v1.32.2/kubeadm.download Pwd: Mode:2 Umask:---------- Detectors:[0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0 0x5c5ece0] Decompressors:map[bz2:0xc0004c8690 gz:0xc0004c8698 tar:0xc0004c8610 tar.bz2:0xc0004c8620 tar.gz:0xc0004c8630 tar.xz:0xc0004c8650 tar.zst:0xc0004c8660 tbz2:0xc0004c8620 tgz:0xc0004c8630 txz:0xc0004c8650 tzst:0xc0004c8660 xz:0xc0004c8700 zip:0xc0004c8720 zst:0xc0004c8708] Getters:map[file:0xc00216a250 http:0xc00012c550 https:0xc00012c5a0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: read tcp 10.154.0.3:60586->151.101.193.55:443: read: connection reset by peer
	W0414 14:29:58.750801 1213155 out.go:270] * 
	W0414 14:29:58.751639 1213155 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 14:29:58.753070 1213155 out.go:201] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	24e6d7cfe7ea4       8c811b4aec35f       18 minutes ago      Running             busybox                   0                   78438e8022143       busybox-58667487b6-t6bgg
	731a9f2fe8645       c69fa2e9cbf5f       18 minutes ago      Running             coredns                   0                   e56d2e4c87eea       coredns-668d6bf9bc-qnl6q
	0ec0a3a234c7c       c69fa2e9cbf5f       18 minutes ago      Running             coredns                   0                   2818c413e6e32       coredns-668d6bf9bc-wbn4p
	922f97d06563e       6e38f40d628db       18 minutes ago      Running             storage-provisioner       0                   4de376d34ee7f       storage-provisioner
	2df8ccb8d6ed9       df3849d954c98       18 minutes ago      Running             kindnet-cni               0                   08244cfc780bd       kindnet-hm99t
	e22a81661302f       f1332858868e1       18 minutes ago      Running             kube-proxy                0                   f20a0bcfbd507       kube-proxy-cg945
	9914f8879fc43       6ff023a402a69       18 minutes ago      Running             kube-vip                  0                   7b4e857fc4a72       kube-vip-ha-290859
	8263b35014337       b6a454c5a800d       19 minutes ago      Running             kube-controller-manager   0                   96ffccfabb2f0       kube-controller-manager-ha-290859
	3607093f95b04       85b7a174738ba       19 minutes ago      Running             kube-apiserver            0                   7d06c53c8318a       kube-apiserver-ha-290859
	b9d0c94204534       a9e7e6b294baf       19 minutes ago      Running             etcd                      0                   07c98c2ded11c       etcd-ha-290859
	341626ffff967       d8e673e7c9983       19 minutes ago      Running             kube-scheduler            0                   d86edf81d4f34       kube-scheduler-ha-290859
	
	
	==> containerd <==
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.168944603Z" level=info msg="StartContainer for \"0ec0a3a234c7c9ab89ca83a237362a229e9c5f0e94fdbf641b886cf994e1cd2f\""
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.181036869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qnl6q,Uid:a590080d-c4b1-4697-9849-ae6130e483a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"e56d2e4c87eea2d527e5c301e33c596e4ec4533b17e49248e3c35eeb66f90f11\""
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.186359489Z" level=info msg="CreateContainer within sandbox \"e56d2e4c87eea2d527e5c301e33c596e4ec4533b17e49248e3c35eeb66f90f11\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.209760426Z" level=info msg="CreateContainer within sandbox \"e56d2e4c87eea2d527e5c301e33c596e4ec4533b17e49248e3c35eeb66f90f11\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0\""
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.212826022Z" level=info msg="StartContainer for \"922f97d06563e10c12ce83edd45e4f1aa0b78449dcdb50b413a7f4fc80cc346b\" returns successfully"
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.215681811Z" level=info msg="StartContainer for \"731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0\""
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.285830032Z" level=info msg="StartContainer for \"0ec0a3a234c7c9ab89ca83a237362a229e9c5f0e94fdbf641b886cf994e1cd2f\" returns successfully"
	Apr 14 14:29:45 ha-290859 containerd[643]: time="2025-04-14T14:29:45.294639585Z" level=info msg="StartContainer for \"731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0\" returns successfully"
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.131928214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-58667487b6-t6bgg,Uid:bd39f57c-bcb5-4d77-b171-6d4d2f237e54,Namespace:default,Attempt:0,}"
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.218617705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.218691310Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.218706805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.218958691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.281907696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-58667487b6-t6bgg,Uid:bd39f57c-bcb5-4d77-b171-6d4d2f237e54,Namespace:default,Attempt:0,} returns sandbox id \"78438e8022143055bed5e2d8a26db130ead88208a68bd14ca25618be3edf24e2\""
	Apr 14 14:30:01 ha-290859 containerd[643]: time="2025-04-14T14:30:01.284050999Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\""
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.401970091Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.404464641Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28: active requests=0, bytes read=727667"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.406415797Z" level=info msg="ImageCreate event name:\"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.409920833Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.411266903Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28\" with image id \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\", repo tag \"gcr.io/k8s-minikube/busybox:1.28\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\", size \"725911\" in 2.127171694s"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.411378057Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\" returns image reference \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\""
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.414728181Z" level=info msg="CreateContainer within sandbox \"78438e8022143055bed5e2d8a26db130ead88208a68bd14ca25618be3edf24e2\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.437197602Z" level=info msg="CreateContainer within sandbox \"78438e8022143055bed5e2d8a26db130ead88208a68bd14ca25618be3edf24e2\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"24e6d7cfe7ea4490a4e08a40f32b9cf717c4d83060631102c580d6adf2fc47f5\""
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.439640223Z" level=info msg="StartContainer for \"24e6d7cfe7ea4490a4e08a40f32b9cf717c4d83060631102c580d6adf2fc47f5\""
	Apr 14 14:30:03 ha-290859 containerd[643]: time="2025-04-14T14:30:03.489937462Z" level=info msg="StartContainer for \"24e6d7cfe7ea4490a4e08a40f32b9cf717c4d83060631102c580d6adf2fc47f5\" returns successfully"
	
	
	==> coredns [0ec0a3a234c7c9ab89ca83a237362a229e9c5f0e94fdbf641b886cf994e1cd2f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:46089 - 56153 "HINFO IN 6072608555509463616.6529762715821029691. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009374887s
	[INFO] 10.244.0.4:35907 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000221161s
	[INFO] 10.244.0.4:36782 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005796917s
	[INFO] 10.244.0.4:41522 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000189547s
	[INFO] 10.244.0.4:42146 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118814s
	[INFO] 10.244.0.4:60607 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123758s
	[INFO] 10.244.0.4:43711 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000363945s
	[INFO] 10.244.0.4:55165 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147511s
	[INFO] 10.244.0.4:37988 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063814s
	[INFO] 10.244.0.4:34715 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110518s
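Each CoreDNS line records client IP:port, query type and name, protocol, response flags (aa = authoritative), and latency; the NXDOMAIN answers for `kubernetes.default` and `kubernetes.default.default.svc.cluster.local` are normal search-path expansion misses before the full `kubernetes.default.svc.cluster.local` name resolves. A sketch that issues the same in-cluster lookup against the kube-dns ClusterIP (10.96.0.10, the address allocated earlier in this log):

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Query the kube-dns ClusterIP directly, mirroring the logged A lookup
	// for the apiserver's service name.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
	fmt.Println(addrs, err)
}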
	
	
	==> coredns [731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:50026 - 40228 "HINFO IN 6089878548460793106.7503956428927620962. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010088983s
	[INFO] 10.244.0.4:56129 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00054069s
	[INFO] 10.244.0.4:53926 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.015577927s
	[INFO] 10.244.0.4:39454 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 1.017801671s
	[INFO] 10.244.0.4:52928 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.006480432s
	[INFO] 10.244.0.4:37155 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000144828s
	[INFO] 10.244.0.4:60063 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003567762s
	[INFO] 10.244.0.4:60207 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153406s
	[INFO] 10.244.0.4:60174 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117303s
	[INFO] 10.244.0.4:60031 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124845s
	[INFO] 10.244.0.4:43114 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000177401s
	[INFO] 10.244.0.4:59108 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000291115s
	
	
	==> describe nodes <==
	Name:               ha-290859
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-290859
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ed8f1f01b35eff2786f40199152a1775806f2de2
	                    minikube.k8s.io/name=ha-290859
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_14T14_29_26_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Apr 2025 14:29:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-290859
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Apr 2025 14:48:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Apr 2025 14:47:25 +0000   Mon, 14 Apr 2025 14:29:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Apr 2025 14:47:25 +0000   Mon, 14 Apr 2025 14:29:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Apr 2025 14:47:25 +0000   Mon, 14 Apr 2025 14:29:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Apr 2025 14:47:25 +0000   Mon, 14 Apr 2025 14:29:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.110
	  Hostname:    ha-290859
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0538f5775f954b3bbf6bc94e8eb6c49a
	  System UUID:                0538f577-5f95-4b3b-bf6b-c94e8eb6c49a
	  Boot ID:                    357ae105-a7f9-47b1-bf31-1c1aadedfe92
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-t6bgg             0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-668d6bf9bc-qnl6q             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                 coredns-668d6bf9bc-wbn4p             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                 etcd-ha-290859                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         18m
	  kube-system                 kindnet-hm99t                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      18m
	  kube-system                 kube-apiserver-ha-290859             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-ha-290859    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-cg945                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-ha-290859             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-vip-ha-290859                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 18m   kube-proxy       
	  Normal  Starting                 18m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  18m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  18m   kubelet          Node ha-290859 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m   kubelet          Node ha-290859 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m   kubelet          Node ha-290859 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18m   node-controller  Node ha-290859 event: Registered Node ha-290859 in Controller
	  Normal  NodeReady                18m   kubelet          Node ha-290859 status is now: NodeReady
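
The Allocated resources summary above is just the column sums of the pod table, and it checks out: CPU requests are 2×100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (apiserver) + 200m (controller-manager) + 100m (scheduler) = 950m, i.e. 47% of the node's 2 CPUs. A quick recomputation (node allocatable 2164184Ki ≈ 2113Mi):

package main

import "fmt"

func main() {
	// Recompute the "Allocated resources" summary for ha-290859 from the
	// pod table above (millicores and Mi).
	cpuReq := 100*2 + 100 + 100 + 250 + 200 + 100 // coredns x2, etcd, kindnet, apiserver, controller-manager, scheduler
	memReq := 70*2 + 100 + 50                     // coredns x2, etcd, kindnet
	memLim := 170*2 + 50                          // coredns x2, kindnet

	fmt.Printf("cpu requests: %dm (%d%%)\n", cpuReq, cpuReq*100/2000)  // 950m (47%)
	fmt.Printf("mem requests: %dMi (%d%%)\n", memReq, memReq*100/2113) // 290Mi (13%)
	fmt.Printf("mem limits:   %dMi (%d%%)\n", memLim, memLim*100/2113) // 390Mi (18%)
}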
	
	
	Name:               ha-290859-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-290859-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ed8f1f01b35eff2786f40199152a1775806f2de2
	                    minikube.k8s.io/name=ha-290859
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_04_14T14_42_30_0700
	                    minikube.k8s.io/version=v1.35.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Apr 2025 14:42:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-290859-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Apr 2025 14:48:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Apr 2025 14:46:33 +0000   Mon, 14 Apr 2025 14:42:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Apr 2025 14:46:33 +0000   Mon, 14 Apr 2025 14:42:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Apr 2025 14:46:33 +0000   Mon, 14 Apr 2025 14:42:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Apr 2025 14:46:33 +0000   Mon, 14 Apr 2025 14:42:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.112
	  Hostname:    ha-290859-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 96e9da9bd9e1490583702338b88b0c23
	  System UUID:                96e9da9b-d9e1-4905-8370-2338b88b0c23
	  Boot ID:                    b2600615-03c7-4984-8138-73f9baedc04e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-8bg2x    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kindnet-4jz25               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m52s
	  kube-system                 kube-proxy-sp56w            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m45s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m52s (x2 over 5m52s)  kubelet          Node ha-290859-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m52s (x2 over 5m52s)  kubelet          Node ha-290859-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m52s (x2 over 5m52s)  kubelet          Node ha-290859-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m48s                  node-controller  Node ha-290859-m03 event: Registered Node ha-290859-m03 in Controller
	  Normal  NodeReady                5m32s                  kubelet          Node ha-290859-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051284] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038065] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.815736] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.968563] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.543371] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr14 14:29] systemd-fstab-generator[505]: Ignoring "noauto" option for root device
	[  +0.058894] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059786] systemd-fstab-generator[518]: Ignoring "noauto" option for root device
	[  +0.183634] systemd-fstab-generator[532]: Ignoring "noauto" option for root device
	[  +0.109211] systemd-fstab-generator[544]: Ignoring "noauto" option for root device
	[  +0.261328] systemd-fstab-generator[574]: Ignoring "noauto" option for root device
	[  +4.868852] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.061817] kauditd_printk_skb: 158 callbacks suppressed
	[  +0.541337] systemd-fstab-generator[688]: Ignoring "noauto" option for root device
	[  +4.433977] systemd-fstab-generator[826]: Ignoring "noauto" option for root device
	[  +0.054755] kauditd_printk_skb: 46 callbacks suppressed
	[  +7.040196] systemd-fstab-generator[1293]: Ignoring "noauto" option for root device
	[  +0.092655] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.133260] kauditd_printk_skb: 36 callbacks suppressed
	[ +14.332004] kauditd_printk_skb: 23 callbacks suppressed
	[Apr14 14:30] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [b9d0c942045346e617420beacf1ee53ebaa73b72295bfad233845fe524f8b15c] <==
	{"level":"info","ts":"2025-04-14T14:29:20.940910Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:29:20.941291Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T14:29:20.941327Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T14:29:20.942134Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-14T14:29:20.942264Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.110:2379"}
	{"level":"info","ts":"2025-04-14T14:29:20.943625Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-14T14:29:20.943655Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2025-04-14T14:29:27.104552Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.197172ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/node-controller\" limit:1 ","response":"range_response_count:1 size:195"}
	{"level":"info","ts":"2025-04-14T14:29:27.104712Z","caller":"traceutil/trace.go:171","msg":"trace[2014118741] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/node-controller; range_end:; response_count:1; response_revision:283; }","duration":"161.489617ms","start":"2025-04-14T14:29:26.943197Z","end":"2025-04-14T14:29:27.104687Z","steps":["trace[2014118741] 'range keys from in-memory index tree'  (duration: 161.141805ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:29:27.105569Z","caller":"traceutil/trace.go:171","msg":"trace[1003808847] transaction","detail":"{read_only:false; response_revision:284; number_of_response:1; }","duration":"157.128151ms","start":"2025-04-14T14:29:26.948431Z","end":"2025-04-14T14:29:27.105559Z","steps":["trace[1003808847] 'process raft request'  (duration: 84.378612ms)","trace[1003808847] 'compare'  (duration: 71.52798ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-14T14:29:27.104865Z","caller":"traceutil/trace.go:171","msg":"trace[43329066] linearizableReadLoop","detail":"{readStateIndex:297; appliedIndex:296; }","duration":"119.436827ms","start":"2025-04-14T14:29:26.985404Z","end":"2025-04-14T14:29:27.104841Z","steps":["trace[43329066] 'read index received'  (duration: 47.335931ms)","trace[43329066] 'applied index is now lower than readState.Index'  (duration: 72.100547ms)"],"step_count":2}
	{"level":"warn","ts":"2025-04-14T14:29:27.105882Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.482108ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-290859\" limit:1 ","response":"range_response_count:1 size:4024"}
	{"level":"info","ts":"2025-04-14T14:29:27.105907Z","caller":"traceutil/trace.go:171","msg":"trace[1848025885] range","detail":"{range_begin:/registry/minions/ha-290859; range_end:; response_count:1; response_revision:284; }","duration":"120.538719ms","start":"2025-04-14T14:29:26.985360Z","end":"2025-04-14T14:29:27.105899Z","steps":["trace[1848025885] 'agreement among raft nodes before linearized reading'  (duration: 120.384333ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:30:04.979205Z","caller":"traceutil/trace.go:171","msg":"trace[85484590] transaction","detail":"{read_only:false; response_revision:496; number_of_response:1; }","duration":"156.247744ms","start":"2025-04-14T14:30:04.822935Z","end":"2025-04-14T14:30:04.979183Z","steps":["trace[85484590] 'process raft request'  (duration: 156.102613ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:39:20.967676Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":955}
	{"level":"info","ts":"2025-04-14T14:39:20.980951Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":955,"took":"12.971168ms","hash":3281203929,"current-db-size-bytes":2400256,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2400256,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2025-04-14T14:39:20.980998Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3281203929,"revision":955,"compact-revision":-1}
	{"level":"info","ts":"2025-04-14T14:42:12.425594Z","caller":"traceutil/trace.go:171","msg":"trace[593749251] linearizableReadLoop","detail":"{readStateIndex:1974; appliedIndex:1973; }","duration":"103.549581ms","start":"2025-04-14T14:42:12.322004Z","end":"2025-04-14T14:42:12.425554Z","steps":["trace[593749251] 'read index received'  (duration: 102.720139ms)","trace[593749251] 'applied index is now lower than readState.Index'  (duration: 828.805µs)"],"step_count":2}
	{"level":"warn","ts":"2025-04-14T14:42:12.426144Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.759593ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2025-04-14T14:42:12.426196Z","caller":"traceutil/trace.go:171","msg":"trace[257637869] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; response_count:0; response_revision:1805; }","duration":"104.23976ms","start":"2025-04-14T14:42:12.321948Z","end":"2025-04-14T14:42:12.426188Z","steps":["trace[257637869] 'agreement among raft nodes before linearized reading'  (duration: 103.769974ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:42:12.425685Z","caller":"traceutil/trace.go:171","msg":"trace[874985590] transaction","detail":"{read_only:false; response_revision:1805; number_of_response:1; }","duration":"128.996586ms","start":"2025-04-14T14:42:12.296675Z","end":"2025-04-14T14:42:12.425672Z","steps":["trace[874985590] 'process raft request'  (duration: 128.079961ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T14:42:29.811595Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.362023ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11932452365827166964 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:25989634b465d2f3>","response":"size:42"}
	{"level":"info","ts":"2025-04-14T14:44:20.976766Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1495}
	{"level":"info","ts":"2025-04-14T14:44:20.980966Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1495,"took":"3.550898ms","hash":2769383186,"current-db-size-bytes":2400256,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2031616,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2025-04-14T14:44:20.981013Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2769383186,"revision":1495,"compact-revision":955}
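The compaction entries show the apiserver's periodic request to discard old MVCC history: revision 955 is compacted at 14:39:20 and revision 1495 five minutes later, holding the db at ~2.4 MB. The same operation issued with the etcd v3 client (a sketch; the https endpoint from the log would also need TLS credentials):

package main

import (
	"context"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"https://192.168.39.110:2379"}, // client URL from the log; TLS config omitted
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// Compact the keyspace up to a revision, as the logs show at 955 and 1495.
	if _, err := cli.Compact(context.Background(), 1495); err != nil {
		log.Fatal(err)
	}
}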
	
	
	==> kernel <==
	 14:48:21 up 19 min,  0 users,  load average: 0.08, 0.12, 0.09
	Linux ha-290859 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2df8ccb8d6ed928a95e69ecd1be2105fc737c699aa26805820a0af0eca5bb50d] <==
	I0414 14:47:14.502438       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:47:24.506455       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:47:24.506598       1 main.go:301] handling current node
	I0414 14:47:24.506633       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:47:24.506642       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:47:34.500917       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:47:34.501000       1 main.go:301] handling current node
	I0414 14:47:34.501038       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:47:34.501044       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:47:44.501996       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:47:44.502048       1 main.go:301] handling current node
	I0414 14:47:44.502077       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:47:44.502088       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:47:54.500375       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:47:54.500405       1 main.go:301] handling current node
	I0414 14:47:54.500419       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:47:54.500423       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:48:04.500374       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:48:04.500422       1 main.go:301] handling current node
	I0414 14:48:04.500440       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:48:04.500446       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:48:14.504752       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:48:14.505066       1 main.go:301] handling current node
	I0414 14:48:14.505143       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:48:14.505277       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [3607093f95b0430c4841d7be9ed19d0163ff2e9ee2889a44f89bd1ca07bf42d3] <==
	I0414 14:29:22.362271       1 autoregister_controller.go:144] Starting autoregister controller
	I0414 14:29:22.362276       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0414 14:29:22.362280       1 cache.go:39] Caches are synced for autoregister controller
	I0414 14:29:22.378719       1 controller.go:615] quota admission added evaluator for: namespaces
	I0414 14:29:22.457815       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0414 14:29:23.164003       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0414 14:29:23.168635       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0414 14:29:23.168816       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0414 14:29:23.763560       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0414 14:29:23.812117       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0414 14:29:23.884276       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0414 14:29:23.896601       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.110]
	I0414 14:29:23.897534       1 controller.go:615] quota admission added evaluator for: endpoints
	I0414 14:29:23.902387       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0414 14:29:24.193931       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0414 14:29:25.780107       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0414 14:29:25.808820       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0414 14:29:25.816856       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0414 14:29:29.653221       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0414 14:29:29.756960       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0414 14:41:55.019097       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52466: use of closed network connection
	E0414 14:41:55.440782       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52532: use of closed network connection
	E0414 14:41:55.859929       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52600: use of closed network connection
	E0414 14:41:58.277207       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52686: use of closed network connection
	E0414 14:41:58.438151       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52698: use of closed network connection
	
	
	==> kube-controller-manager [8263b35014337f6119ba3a0d6487090fd5b1b3b8a002a99623620e847d186847] <==
	I0414 14:42:20.033463       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
	I0414 14:42:29.935163       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-290859-m03\" does not exist"
	I0414 14:42:29.948852       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-290859-m03" podCIDRs=["10.244.1.0/24"]
	I0414 14:42:29.949152       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:29.949831       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:29.958386       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="234.248µs"
	I0414 14:42:29.963750       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:29.969981       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="39.002µs"
	I0414 14:42:30.275380       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:30.614411       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:33.964410       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-290859-m03"
	I0414 14:42:34.046665       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:39.961881       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:49.191468       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-290859-m03"
	I0414 14:42:49.192361       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:49.201252       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:49.216690       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="71.679µs"
	I0414 14:42:49.217122       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="45.948µs"
	I0414 14:42:49.230018       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="69.053µs"
	I0414 14:42:52.664944       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="13.387962ms"
	I0414 14:42:52.665652       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="82.546µs"
	I0414 14:42:53.979890       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:43:00.010906       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:46:33.503243       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:47:25.635375       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
	
	
	==> kube-proxy [e22a81661302ff340c9846a7a06a13d955ab98cfe8e7088e0c805fb4f3eee8a2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0414 14:29:30.555771       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0414 14:29:30.580550       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.110"]
	E0414 14:29:30.580640       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0414 14:29:30.617235       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0414 14:29:30.617293       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0414 14:29:30.617328       1 server_linux.go:170] "Using iptables Proxier"
	I0414 14:29:30.620046       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0414 14:29:30.620989       1 server.go:497] "Version info" version="v1.32.2"
	I0414 14:29:30.621018       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 14:29:30.625365       1 config.go:329] "Starting node config controller"
	I0414 14:29:30.625863       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0414 14:29:30.628597       1 config.go:199] "Starting service config controller"
	I0414 14:29:30.628644       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0414 14:29:30.628665       1 config.go:105] "Starting endpoint slice config controller"
	I0414 14:29:30.628683       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0414 14:29:30.726314       1 shared_informer.go:320] Caches are synced for node config
	I0414 14:29:30.729639       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0414 14:29:30.729680       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [341626ffff967b14e3bfaa050905eba2b82a07223c0356ee50b5deeef6d9898b] <==
	E0414 14:29:22.288686       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:22.287191       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:22.288704       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:22.286394       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0414 14:29:22.288719       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0414 14:29:22.285771       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.108289       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0414 14:29:23.108351       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.153824       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:23.153954       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.203744       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0414 14:29:23.203977       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0414 14:29:23.367236       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0414 14:29:23.367550       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.396026       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0414 14:29:23.396243       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.401643       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:23.401820       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.425454       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0414 14:29:23.425684       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.433181       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:23.433222       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.457688       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0414 14:29:23.457949       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0414 14:29:25.662221       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 14 14:43:25 ha-290859 kubelet[1300]: E0414 14:43:25.692316    1300 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:43:25 ha-290859 kubelet[1300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:43:25 ha-290859 kubelet[1300]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:43:25 ha-290859 kubelet[1300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:43:25 ha-290859 kubelet[1300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 14:44:25 ha-290859 kubelet[1300]: E0414 14:44:25.693018    1300 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:44:25 ha-290859 kubelet[1300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:44:25 ha-290859 kubelet[1300]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:44:25 ha-290859 kubelet[1300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:44:25 ha-290859 kubelet[1300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 14:45:25 ha-290859 kubelet[1300]: E0414 14:45:25.692785    1300 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:45:25 ha-290859 kubelet[1300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:45:25 ha-290859 kubelet[1300]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:45:25 ha-290859 kubelet[1300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:45:25 ha-290859 kubelet[1300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 14:46:25 ha-290859 kubelet[1300]: E0414 14:46:25.693088    1300 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:46:25 ha-290859 kubelet[1300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:46:25 ha-290859 kubelet[1300]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:46:25 ha-290859 kubelet[1300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:46:25 ha-290859 kubelet[1300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 14:47:25 ha-290859 kubelet[1300]: E0414 14:47:25.692664    1300 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:47:25 ha-290859 kubelet[1300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:47:25 ha-290859 kubelet[1300]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:47:25 ha-290859 kubelet[1300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:47:25 ha-290859 kubelet[1300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
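Two patterns in the dump above are worth separating from the actual failure. First, the kube-scheduler "forbidden" warnings at 14:29:23 are startup noise: the scheduler's informers begin listing resources before the API server has finished bootstrapping the default RBAC policy, and they stop once the caches sync (the 14:29:25 line). One way to confirm the permission is fine after bootstrap is a SubjectAccessReview; this is a minimal client-go sketch (it assumes a reachable kubeconfig at the default path and is not part of the test code):

	package main
	
	import (
		"context"
		"fmt"
	
		authv1 "k8s.io/api/authorization/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		// Ask the API server whether the scheduler may LIST csidrivers,
		// mirroring the first failed call in the log above.
		sar := &authv1.SubjectAccessReview{
			Spec: authv1.SubjectAccessReviewSpec{
				User: "system:kube-scheduler",
				ResourceAttributes: &authv1.ResourceAttributes{
					Verb:     "list",
					Group:    "storage.k8s.io",
					Resource: "csidrivers",
				},
			},
		}
		res, err := cs.AuthorizationV1().SubjectAccessReviews().Create(
			context.Background(), sar, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("allowed=%v reason=%q\n", res.Status.Allowed, res.Status.Reason)
	}

Second, the repeating kubelet "Could not set up iptables canary" errors come from the kubelet's periodic probe that recreates a KUBE-KUBELET-CANARY chain to detect iptables flushes; here the IPv6 half fails because the guest kernel exposes no ip6tables nat table (ip6table_nat is not loaded). A sketch of the same probe via os/exec (assumed shape, not kubelet's code):

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Try to create the canary chain in the ip6tables nat table, then
		// remove it again. On this guest the create fails with "Table does
		// not exist" because ip6table_nat is not loaded.
		out, err := exec.Command("ip6tables", "-t", "nat", "-N", "KUBE-KUBELET-CANARY").CombinedOutput()
		if err != nil {
			fmt.Printf("canary failed: %v\n%s", err, out)
			return
		}
		fmt.Println("ip6tables nat table present; cleaning up canary chain")
		_ = exec.Command("ip6tables", "-t", "nat", "-X", "KUBE-KUBELET-CANARY").Run()
	}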
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-290859 -n ha-290859
helpers_test.go:261: (dbg) Run:  kubectl --context ha-290859 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-58667487b6-q9jvx
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-290859 describe pod busybox-58667487b6-q9jvx
helpers_test.go:282: (dbg) kubectl --context ha-290859 describe pod busybox-58667487b6-q9jvx:

                                                
                                                
-- stdout --
	Name:             busybox-58667487b6-q9jvx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=58667487b6
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-58667487b6
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fklg7 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-fklg7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                    From               Message
	  ----     ------            ----                   ----               -------
	  Warning  FailedScheduling  7m55s (x3 over 18m)    default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  5m43s (x2 over 5m52s)  default-scheduler  0/2 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  5m22s (x2 over 5m32s)  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (2.41s)
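For context on the FailedScheduling events above: the busybox pods evidently carry pod anti-affinity, so a second replica cannot land on a node that already runs one and stays Pending until another node is Ready. A sketch of the kind of required anti-affinity term that produces exactly these messages (assumed shape; the test's actual manifest may differ):

	package main
	
	import (
		"encoding/json"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)
	
	func main() {
		// One busybox replica per hostname: a replica is rejected from any
		// node already running a pod labelled app=busybox, which is what
		// "didn't match pod anti-affinity rules" reports above.
		aff := corev1.Affinity{
			PodAntiAffinity: &corev1.PodAntiAffinity{
				RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
					LabelSelector: &metav1.LabelSelector{
						MatchLabels: map[string]string{"app": "busybox"},
					},
					TopologyKey: "kubernetes.io/hostname",
				}},
			},
		}
		out, err := json.MarshalIndent(aff, "", "  ")
		if err != nil {
			panic(err)
		}
		fmt.Println(string(out))
	}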

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (473.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-290859 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-290859 -v=7 --alsologtostderr
E0414 14:49:22.644172 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/functional-905978/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:50:58.679388 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/addons-537199/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-290859 -v=7 --alsologtostderr: (3m2.941056062s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-290859 --wait=true -v=7 --alsologtostderr
E0414 14:52:59.574945 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/functional-905978/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:55:58.679502 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/addons-537199/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ha-290859 --wait=true -v=7 --alsologtostderr: exit status 80 (4m47.231066741s)

                                                
                                                
-- stdout --
	* [ha-290859] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20512-1196368/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1196368/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "ha-290859" primary control-plane node in "ha-290859" cluster
	* Restarting existing kvm2 VM for "ha-290859" ...
	* Preparing Kubernetes v1.32.2 on containerd 1.7.23 ...
	* Enabled addons: 
	
	* Starting "ha-290859-m02" control-plane node in "ha-290859" cluster
	* Restarting existing kvm2 VM for "ha-290859-m02" ...
	* Found network options:
	  - NO_PROXY=192.168.39.110
	* Preparing Kubernetes v1.32.2 on containerd 1.7.23 ...
	  - env NO_PROXY=192.168.39.110
	* Verifying Kubernetes components...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0414 14:51:24.924385 1221070 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:51:24.924621 1221070 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:51:24.924629 1221070 out.go:358] Setting ErrFile to fd 2...
	I0414 14:51:24.924633 1221070 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:51:24.924808 1221070 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
	I0414 14:51:24.925345 1221070 out.go:352] Setting JSON to false
	I0414 14:51:24.926340 1221070 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":23628,"bootTime":1744618657,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 14:51:24.926457 1221070 start.go:139] virtualization: kvm guest
	I0414 14:51:24.928287 1221070 out.go:177] * [ha-290859] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 14:51:24.929459 1221070 out.go:177]   - MINIKUBE_LOCATION=20512
	I0414 14:51:24.929469 1221070 notify.go:220] Checking for updates...
	I0414 14:51:24.931737 1221070 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 14:51:24.933068 1221070 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:51:24.934102 1221070 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:51:24.935103 1221070 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 14:51:24.936089 1221070 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 14:51:24.937496 1221070 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:51:24.937602 1221070 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 14:51:24.938128 1221070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:51:24.938198 1221070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:51:24.954244 1221070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45077
	I0414 14:51:24.954880 1221070 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:51:24.955464 1221070 main.go:141] libmachine: Using API Version  1
	I0414 14:51:24.955489 1221070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:51:24.955900 1221070 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:51:24.956117 1221070 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:51:24.990242 1221070 out.go:177] * Using the kvm2 driver based on existing profile
	I0414 14:51:24.991319 1221070 start.go:297] selected driver: kvm2
	I0414 14:51:24.991332 1221070 start.go:901] validating driver "kvm2" against &{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.112 Port:0 KubernetesVersion:v1.32.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:51:24.991491 1221070 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 14:51:24.991827 1221070 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:51:24.991902 1221070 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20512-1196368/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 14:51:25.007424 1221070 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 14:51:25.008082 1221070 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 14:51:25.008124 1221070 cni.go:84] Creating CNI manager for ""
	I0414 14:51:25.008189 1221070 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0414 14:51:25.008244 1221070 start.go:340] cluster config:
	{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.112 Port:0 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:51:25.008400 1221070 iso.go:125] acquiring lock: {Name:mkbf783c803effe6c4b8297ac6b84dcca9e29413 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:51:25.010019 1221070 out.go:177] * Starting "ha-290859" primary control-plane node in "ha-290859" cluster
	I0414 14:51:25.011347 1221070 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:51:25.011399 1221070 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
	I0414 14:51:25.011409 1221070 cache.go:56] Caching tarball of preloaded images
	I0414 14:51:25.011488 1221070 preload.go:172] Found /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0414 14:51:25.011498 1221070 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0414 14:51:25.011617 1221070 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:51:25.011799 1221070 start.go:360] acquireMachinesLock for ha-290859: {Name:mk496006d22a0565bb9e0d565e1b3cb0cf0971cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 14:51:25.011840 1221070 start.go:364] duration metric: took 23.649µs to acquireMachinesLock for "ha-290859"
	I0414 14:51:25.011855 1221070 start.go:96] Skipping create...Using existing machine configuration
	I0414 14:51:25.011862 1221070 fix.go:54] fixHost starting: 
	I0414 14:51:25.012121 1221070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:51:25.012156 1221070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:51:25.026599 1221070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40091
	I0414 14:51:25.027122 1221070 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:51:25.027660 1221070 main.go:141] libmachine: Using API Version  1
	I0414 14:51:25.027688 1221070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:51:25.028011 1221070 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:51:25.028229 1221070 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:51:25.028380 1221070 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:51:25.030231 1221070 fix.go:112] recreateIfNeeded on ha-290859: state=Stopped err=<nil>
	I0414 14:51:25.030265 1221070 main.go:141] libmachine: (ha-290859) Calling .DriverName
	W0414 14:51:25.030457 1221070 fix.go:138] unexpected machine state, will restart: <nil>
	I0414 14:51:25.032663 1221070 out.go:177] * Restarting existing kvm2 VM for "ha-290859" ...
	I0414 14:51:25.033815 1221070 main.go:141] libmachine: (ha-290859) Calling .Start
	I0414 14:51:25.034026 1221070 main.go:141] libmachine: (ha-290859) starting domain...
	I0414 14:51:25.034048 1221070 main.go:141] libmachine: (ha-290859) ensuring networks are active...
	I0414 14:51:25.034729 1221070 main.go:141] libmachine: (ha-290859) Ensuring network default is active
	I0414 14:51:25.035067 1221070 main.go:141] libmachine: (ha-290859) Ensuring network mk-ha-290859 is active
	I0414 14:51:25.035424 1221070 main.go:141] libmachine: (ha-290859) getting domain XML...
	I0414 14:51:25.036088 1221070 main.go:141] libmachine: (ha-290859) creating domain...
	I0414 14:51:26.234459 1221070 main.go:141] libmachine: (ha-290859) waiting for IP...
	I0414 14:51:26.235587 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:26.236072 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:26.236210 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:26.236086 1221099 retry.go:31] will retry after 280.740636ms: waiting for domain to come up
	I0414 14:51:26.518687 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:26.519197 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:26.519215 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:26.519169 1221099 retry.go:31] will retry after 243.427688ms: waiting for domain to come up
	I0414 14:51:26.765118 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:26.765534 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:26.765582 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:26.765501 1221099 retry.go:31] will retry after 427.840973ms: waiting for domain to come up
	I0414 14:51:27.195132 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:27.195585 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:27.195651 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:27.195569 1221099 retry.go:31] will retry after 469.259994ms: waiting for domain to come up
	I0414 14:51:27.666308 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:27.666685 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:27.666712 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:27.666664 1221099 retry.go:31] will retry after 657.912219ms: waiting for domain to come up
	I0414 14:51:28.326528 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:28.326927 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:28.326955 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:28.326878 1221099 retry.go:31] will retry after 750.684746ms: waiting for domain to come up
	I0414 14:51:29.078742 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:29.079136 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:29.079161 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:29.079097 1221099 retry.go:31] will retry after 1.04198738s: waiting for domain to come up
	I0414 14:51:30.122400 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:30.122774 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:30.122798 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:30.122735 1221099 retry.go:31] will retry after 1.397183101s: waiting for domain to come up
	I0414 14:51:31.522268 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:31.522683 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:31.522709 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:31.522652 1221099 retry.go:31] will retry after 1.778850774s: waiting for domain to come up
	I0414 14:51:33.303491 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:33.303831 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:33.303859 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:33.303809 1221099 retry.go:31] will retry after 2.116605484s: waiting for domain to come up
	I0414 14:51:35.422345 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:35.422804 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:35.422863 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:35.422810 1221099 retry.go:31] will retry after 2.695384495s: waiting for domain to come up
	I0414 14:51:38.120436 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:38.120841 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:38.120862 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:38.120804 1221099 retry.go:31] will retry after 2.291586599s: waiting for domain to come up
	I0414 14:51:40.414425 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:40.414781 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:40.414804 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:40.414750 1221099 retry.go:31] will retry after 4.202133346s: waiting for domain to come up
	I0414 14:51:44.622185 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.622671 1221070 main.go:141] libmachine: (ha-290859) found domain IP: 192.168.39.110
	I0414 14:51:44.622701 1221070 main.go:141] libmachine: (ha-290859) reserving static IP address...
	I0414 14:51:44.622714 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has current primary IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.623272 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "ha-290859", mac: "52:54:00:be:9f:8b", ip: "192.168.39.110"} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:44.623307 1221070 main.go:141] libmachine: (ha-290859) DBG | skip adding static IP to network mk-ha-290859 - found existing host DHCP lease matching {name: "ha-290859", mac: "52:54:00:be:9f:8b", ip: "192.168.39.110"}
	I0414 14:51:44.623333 1221070 main.go:141] libmachine: (ha-290859) reserved static IP address 192.168.39.110 for domain ha-290859
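The retry.go lines above show the shape of the wait loop: jittered delays that grow from a few hundred milliseconds to several seconds until the libvirt domain reports a DHCP lease. A minimal sketch of that loop (assumed structure; the real helper is minikube's retry package wrapped around a lease lookup):

	package main
	
	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)
	
	// waitForIP polls lookup with jittered, growing waits, mirroring the
	// "will retry after ...: waiting for domain to come up" lines above.
	func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
		base := 250 * time.Millisecond
		start := time.Now()
		for time.Since(start) < deadline {
			if ip, err := lookup(); err == nil {
				return ip, nil
			}
			wait := base + time.Duration(rand.Int63n(int64(base))) // jitter
			fmt.Printf("will retry after %v: waiting for domain to come up\n", wait)
			time.Sleep(wait)
			base = base * 3 / 2 // grow ~1.5x, roughly matching the logged intervals
		}
		return "", errors.New("timed out waiting for domain IP")
	}
	
	func main() {
		ip, err := waitForIP(func() (string, error) {
			return "", errors.New("no DHCP lease yet") // stand-in for a virsh lease lookup
		}, 3*time.Second)
		fmt.Println(ip, err)
	}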
	I0414 14:51:44.623346 1221070 main.go:141] libmachine: (ha-290859) waiting for SSH...
	I0414 14:51:44.623353 1221070 main.go:141] libmachine: (ha-290859) DBG | Getting to WaitForSSH function...
	I0414 14:51:44.625584 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.625894 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:44.625919 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.626118 1221070 main.go:141] libmachine: (ha-290859) DBG | Using SSH client type: external
	I0414 14:51:44.626160 1221070 main.go:141] libmachine: (ha-290859) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa (-rw-------)
	I0414 14:51:44.626206 1221070 main.go:141] libmachine: (ha-290859) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.110 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:51:44.626228 1221070 main.go:141] libmachine: (ha-290859) DBG | About to run SSH command:
	I0414 14:51:44.626236 1221070 main.go:141] libmachine: (ha-290859) DBG | exit 0
	I0414 14:51:44.746948 1221070 main.go:141] libmachine: (ha-290859) DBG | SSH cmd err, output: <nil>: 
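The external ssh probe above simply runs `exit 0` until the guest accepts the connection. An equivalent in-process probe using golang.org/x/crypto/ssh would look like this (sketch only; the key path and docker user are taken from the log lines above):

	package main
	
	import (
		"fmt"
		"os"
	
		"golang.org/x/crypto/ssh"
	)
	
	func main() {
		key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/ha-290859/id_rsa")) // assumed local path
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		}
		client, err := ssh.Dial("tcp", "192.168.39.110:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		if err := sess.Run("exit 0"); err != nil {
			panic(err)
		}
		fmt.Println("SSH is up")
	}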
	I0414 14:51:44.747341 1221070 main.go:141] libmachine: (ha-290859) Calling .GetConfigRaw
	I0414 14:51:44.748066 1221070 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:51:44.750502 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.750990 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:44.751020 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.751318 1221070 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:51:44.751530 1221070 machine.go:93] provisionDockerMachine start ...
	I0414 14:51:44.751557 1221070 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:51:44.751774 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:51:44.754154 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.754523 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:44.754549 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.754732 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:51:44.754917 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:44.755086 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:44.755209 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:51:44.755372 1221070 main.go:141] libmachine: Using SSH client type: native
	I0414 14:51:44.755592 1221070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:51:44.755609 1221070 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 14:51:44.859385 1221070 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0414 14:51:44.859420 1221070 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:51:44.859703 1221070 buildroot.go:166] provisioning hostname "ha-290859"
	I0414 14:51:44.859733 1221070 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:51:44.859976 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:51:44.862591 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.862947 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:44.862982 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.863100 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:51:44.863336 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:44.863508 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:44.863682 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:51:44.863853 1221070 main.go:141] libmachine: Using SSH client type: native
	I0414 14:51:44.864206 1221070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:51:44.864235 1221070 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-290859 && echo "ha-290859" | sudo tee /etc/hostname
	I0414 14:51:44.980307 1221070 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-290859
	
	I0414 14:51:44.980345 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:51:44.983477 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.983889 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:44.983935 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.984061 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:51:44.984280 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:44.984453 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:44.984640 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:51:44.984799 1221070 main.go:141] libmachine: Using SSH client type: native
	I0414 14:51:44.985038 1221070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:51:44.985053 1221070 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-290859' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-290859/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-290859' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:51:45.095107 1221070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:51:45.095137 1221070 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1196368/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1196368/.minikube}
	I0414 14:51:45.095159 1221070 buildroot.go:174] setting up certificates
	I0414 14:51:45.095170 1221070 provision.go:84] configureAuth start
	I0414 14:51:45.095189 1221070 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:51:45.095535 1221070 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:51:45.098271 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.098658 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:45.098683 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.098857 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:51:45.101319 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.101590 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:45.101614 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.101756 1221070 provision.go:143] copyHostCerts
	I0414 14:51:45.101791 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:51:45.101823 1221070 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem, removing ...
	I0414 14:51:45.101841 1221070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:51:45.101907 1221070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem (1123 bytes)
	I0414 14:51:45.101983 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:51:45.102001 1221070 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem, removing ...
	I0414 14:51:45.102007 1221070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:51:45.102032 1221070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem (1675 bytes)
	I0414 14:51:45.102075 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:51:45.102097 1221070 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem, removing ...
	I0414 14:51:45.102103 1221070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:51:45.102122 1221070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem (1082 bytes)
	I0414 14:51:45.102165 1221070 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem org=jenkins.ha-290859 san=[127.0.0.1 192.168.39.110 ha-290859 localhost minikube]
	I0414 14:51:45.257877 1221070 provision.go:177] copyRemoteCerts
	I0414 14:51:45.257960 1221070 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:51:45.257996 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:51:45.261081 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.261410 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:45.261440 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.261666 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:51:45.261911 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:45.262125 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:51:45.262285 1221070 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:51:45.340876 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0414 14:51:45.340975 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0414 14:51:45.362634 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0414 14:51:45.362694 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 14:51:45.383617 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0414 14:51:45.383700 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 14:51:45.404718 1221070 provision.go:87] duration metric: took 309.531359ms to configureAuth
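configureAuth regenerates the docker-machine TLS material; the "generating server cert ... san=[...]" line corresponds to signing a server certificate with the machine CA and embedding the VM's names and IPs as SANs. A self-contained crypto/x509 sketch of that step (it mints a throwaway CA instead of loading .minikube/certs/ca.pem, so it is illustrative only):

	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func must(err error) {
		if err != nil {
			panic(err)
		}
	}
	
	func main() {
		// Throwaway CA standing in for the machine CA on disk.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		must(err)
		ca := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(0, 0, 365),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
		must(err)
		caCert, err := x509.ParseCertificate(caDER)
		must(err)
	
		// Server cert with the SANs the log reports:
		// san=[127.0.0.1 192.168.39.110 ha-290859 localhost minikube]
		srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
		must(err)
		srv := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "ha-290859"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(0, 0, 365),
			DNSNames:     []string{"ha-290859", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.110")},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
		must(err)
		must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
	}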
	I0414 14:51:45.404750 1221070 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:51:45.405030 1221070 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:51:45.405049 1221070 machine.go:96] duration metric: took 653.506288ms to provisionDockerMachine
	I0414 14:51:45.405057 1221070 start.go:293] postStartSetup for "ha-290859" (driver="kvm2")
	I0414 14:51:45.405066 1221070 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:51:45.405099 1221070 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:51:45.405452 1221070 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:51:45.405481 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:51:45.408299 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.408642 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:45.408670 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.408811 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:51:45.408995 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:45.409115 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:51:45.409248 1221070 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:51:45.489101 1221070 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:51:45.493122 1221070 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:51:45.493155 1221070 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/addons for local assets ...
	I0414 14:51:45.493230 1221070 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/files for local assets ...
	I0414 14:51:45.493340 1221070 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> 12036392.pem in /etc/ssl/certs
	I0414 14:51:45.493354 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /etc/ssl/certs/12036392.pem
	I0414 14:51:45.493471 1221070 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:51:45.502327 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:51:45.523422 1221070 start.go:296] duration metric: took 118.348669ms for postStartSetup
	I0414 14:51:45.523473 1221070 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:51:45.523812 1221070 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0414 14:51:45.523846 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:51:45.526608 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.526952 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:45.526984 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.527122 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:51:45.527317 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:45.527485 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:51:45.527636 1221070 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:51:45.609005 1221070 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0414 14:51:45.609116 1221070 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0414 14:51:45.667143 1221070 fix.go:56] duration metric: took 20.655266779s for fixHost
	I0414 14:51:45.667202 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:51:45.670139 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.670591 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:45.670620 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.670836 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:51:45.671137 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:45.671338 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:45.671522 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:51:45.671692 1221070 main.go:141] libmachine: Using SSH client type: native
	I0414 14:51:45.671935 1221070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:51:45.671948 1221070 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:51:45.775787 1221070 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744642305.752586107
	
	I0414 14:51:45.775819 1221070 fix.go:216] guest clock: 1744642305.752586107
	I0414 14:51:45.775848 1221070 fix.go:229] Guest: 2025-04-14 14:51:45.752586107 +0000 UTC Remote: 2025-04-14 14:51:45.667180128 +0000 UTC m=+20.782398303 (delta=85.405979ms)
	I0414 14:51:45.775882 1221070 fix.go:200] guest clock delta is within tolerance: 85.405979ms
	I0414 14:51:45.775900 1221070 start.go:83] releasing machines lock for "ha-290859", held for 20.764045917s
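The guest-clock check above parses the guest's `date +%s.%N` output and compares it with the host clock, resyncing only if the delta leaves tolerance. A small sketch reproducing the logged delta (the 2s tolerance is an assumption; the timestamps are the real values from the lines above):

	package main
	
	import (
		"fmt"
		"math"
		"strconv"
		"time"
	)
	
	// clockDelta parses `date +%s.%N` output from the guest and returns the
	// guest-minus-remote skew plus whether it is inside the tolerance.
	func clockDelta(guestOut string, remote time.Time, tol time.Duration) (time.Duration, bool, error) {
		secs, err := strconv.ParseFloat(guestOut, 64)
		if err != nil {
			return 0, false, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := guest.Sub(remote)
		return delta, math.Abs(float64(delta)) <= float64(tol), nil
	}
	
	func main() {
		// Values lifted from the log above; float64 rounds the nanoseconds
		// slightly, so expect ~85.4ms rather than the exact printed value.
		remote := time.Unix(0, 1744642305667180128)
		delta, ok, err := clockDelta("1744642305.752586107", remote, 2*time.Second)
		fmt.Println(delta, ok, err)
	}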
	I0414 14:51:45.775923 1221070 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:51:45.776216 1221070 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:51:45.778889 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.779306 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:45.779339 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.779531 1221070 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:51:45.780063 1221070 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:51:45.780265 1221070 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:51:45.780372 1221070 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:51:45.780417 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:51:45.780527 1221070 ssh_runner.go:195] Run: cat /version.json
	I0414 14:51:45.780554 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:51:45.783291 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.783315 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.783676 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:45.783718 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.783821 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:51:45.783864 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:45.783889 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.784002 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:51:45.784123 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:45.784177 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:45.784299 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:51:45.784385 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:51:45.784475 1221070 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:51:45.784588 1221070 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
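Two SSH clients are opened at once here: the registry reachability probe and the /version.json read run concurrently. A rough sketch of that pattern, shelling out to ssh rather than using minikube's sshutil (host, user, and key path are taken from the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"sync"
    )

    func main() {
    	cmds := []string{"curl -sS -m 2 https://registry.k8s.io/", "cat /version.json"}
    	var wg sync.WaitGroup
    	for _, c := range cmds {
    		wg.Add(1)
    		go func(cmd string) {
    			defer wg.Done()
    			out, err := exec.Command("ssh",
    				"-i", "/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa",
    				"docker@192.168.39.110", cmd).CombinedOutput()
    			fmt.Printf("%s -> err=%v, %d bytes\n", cmd, err, len(out))
    		}(c)
    	}
    	wg.Wait()
    }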
	I0414 14:51:45.860084 1221070 ssh_runner.go:195] Run: systemctl --version
	I0414 14:51:45.888174 1221070 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 14:51:45.893495 1221070 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:51:45.893571 1221070 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:51:45.908348 1221070 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 14:51:45.908375 1221070 start.go:495] detecting cgroup driver to use...
	I0414 14:51:45.908446 1221070 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0414 14:51:45.935942 1221070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0414 14:51:45.948409 1221070 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:51:45.948475 1221070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:51:45.960942 1221070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:51:45.974488 1221070 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:51:46.086503 1221070 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:51:46.230317 1221070 docker.go:233] disabling docker service ...
	I0414 14:51:46.230381 1221070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:51:46.244297 1221070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:51:46.256626 1221070 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:51:46.408783 1221070 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:51:46.531425 1221070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 14:51:46.544279 1221070 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:51:46.561206 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0414 14:51:46.570536 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0414 14:51:46.579933 1221070 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0414 14:51:46.579987 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0414 14:51:46.589083 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:51:46.598516 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0414 14:51:46.608502 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:51:46.618260 1221070 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:51:46.628002 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0414 14:51:46.637979 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0414 14:51:46.647708 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
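Each sed run above is a line-anchored rewrite of /etc/containerd/config.toml (sandbox image, cgroup driver, runtime type, CNI conf dir). A compact Go equivalent using regexp, with a subset of the rules shown in the log (a sketch, not minikube's containerd package):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	conf := []byte(`[plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.9"
      SystemdCgroup = true
    `)
    	rules := []struct{ pat, repl string }{
    		{`(?m)^( *)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.10"`},
    		{`(?m)^( *)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`}, // cgroupfs driver
    	}
    	for _, r := range rules {
    		conf = regexp.MustCompile(r.pat).ReplaceAll(conf, []byte(r.repl))
    	}
    	fmt.Print(string(conf))
    }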
	I0414 14:51:46.657465 1221070 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:51:46.666456 1221070 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:51:46.666506 1221070 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:51:46.679179 1221070 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
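These two steps make bridged traffic visible to iptables and let the node forward packets: br_netfilter is loaded (the sysctl probe just above failed precisely because the module was absent), then ip_forward is switched on. A sketch under the assumption that writing the procfs knob directly is acceptable:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Needs root, like the sudo invocations in the log.
    	if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
    		fmt.Println("modprobe br_netfilter:", err)
    	}
    	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
    		fmt.Println("ip_forward:", err)
    	}
    }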
	I0414 14:51:46.688058 1221070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:51:46.803994 1221070 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0414 14:51:46.830741 1221070 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0414 14:51:46.830851 1221070 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:51:46.834666 1221070 retry.go:31] will retry after 684.331118ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0414 14:51:47.519413 1221070 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
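The retry above is the usual poll-until-the-socket-exists loop: containerd was just restarted, so the first stat races the daemon bringing its socket back up. A minimal sketch with a deadline and a growing delay (constants are illustrative; minikube's retry helper differs):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for delay := 100 * time.Millisecond; time.Now().Before(deadline); delay *= 2 {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(delay)
    	}
    	return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
    	fmt.Println(waitForSocket("/run/containerd/containerd.sock", 60*time.Second))
    }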
	I0414 14:51:47.524753 1221070 start.go:563] Will wait 60s for crictl version
	I0414 14:51:47.524814 1221070 ssh_runner.go:195] Run: which crictl
	I0414 14:51:47.528401 1221070 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:51:47.567610 1221070 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0414 14:51:47.567684 1221070 ssh_runner.go:195] Run: containerd --version
	I0414 14:51:47.592654 1221070 ssh_runner.go:195] Run: containerd --version
	I0414 14:51:47.616410 1221070 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.23 ...
	I0414 14:51:47.617662 1221070 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:51:47.620124 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:47.620497 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:47.620523 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:47.620761 1221070 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 14:51:47.624661 1221070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
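The bash one-liner above is an idempotent /etc/hosts update: drop any line already mapping host.minikube.internal, append the fresh mapping, and copy the result back into place. The same idea in Go (a sketch; the atomic tmp-file-then-cp dance from the log is elided):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func ensureHost(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+name) { // drop stale entries
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	fmt.Println(ensureHost("/etc/hosts", "192.168.39.1", "host.minikube.internal"))
    }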
	I0414 14:51:47.636875 1221070 kubeadm.go:883] updating cluster {Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.112 Port:0 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 14:51:47.637062 1221070 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:51:47.637127 1221070 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:51:47.668962 1221070 containerd.go:627] all images are preloaded for containerd runtime.
	I0414 14:51:47.668993 1221070 containerd.go:534] Images already preloaded, skipping extraction
	I0414 14:51:47.669051 1221070 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:51:47.700719 1221070 containerd.go:627] all images are preloaded for containerd runtime.
	I0414 14:51:47.700748 1221070 cache_images.go:84] Images are preloaded, skipping loading
	I0414 14:51:47.700756 1221070 kubeadm.go:934] updating node { 192.168.39.110 8443 v1.32.2 containerd true true} ...
	I0414 14:51:47.700911 1221070 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-290859 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.110
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 14:51:47.701015 1221070 ssh_runner.go:195] Run: sudo crictl info
	I0414 14:51:47.733009 1221070 cni.go:84] Creating CNI manager for ""
	I0414 14:51:47.733034 1221070 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0414 14:51:47.733058 1221070 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 14:51:47.733086 1221070 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.110 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-290859 NodeName:ha-290859 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.110"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.110 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 14:51:47.733246 1221070 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.110
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-290859"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.110"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.110"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
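The kubeadm YAML above is not hand-written; it is rendered from Go templates fed by the kubeadm options struct logged earlier. A toy text/template rendering of just the InitConfiguration head, with assumed field names (not minikube's actual template):

    package main

    import (
    	"os"
    	"text/template"
    )

    const head = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    `

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(head))
    	if err := t.Execute(os.Stdout, map[string]any{
    		"AdvertiseAddress": "192.168.39.110",
    		"APIServerPort":    8443,
    	}); err != nil {
    		panic(err)
    	}
    }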
	I0414 14:51:47.733266 1221070 kube-vip.go:115] generating kube-vip config ...
	I0414 14:51:47.733322 1221070 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0414 14:51:47.749704 1221070 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0414 14:51:47.749841 1221070 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.10
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
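Deploying kube-vip needs no API server: the manifest above is a static pod, so writing it under the kubelet's staticPodPath (/etc/kubernetes/manifests, per the KubeletConfiguration earlier) is the whole deployment, which is exactly what the scp a few lines below does. An illustrative sketch:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	manifest := []byte("<rendered kube-vip pod YAML from above>") // placeholder
    	dst := filepath.Join("/etc/kubernetes/manifests", "kube-vip.yaml")
    	// The kubelet watches this directory and starts the pod on its own.
    	fmt.Println(os.WriteFile(dst, manifest, 0o600))
    }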
	I0414 14:51:47.749916 1221070 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 14:51:47.759441 1221070 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 14:51:47.759517 1221070 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0414 14:51:47.768745 1221070 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0414 14:51:47.784598 1221070 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 14:51:47.800512 1221070 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0414 14:51:47.816194 1221070 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1442 bytes)
	I0414 14:51:47.832579 1221070 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0414 14:51:47.836561 1221070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:51:47.848464 1221070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:51:47.961061 1221070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:51:47.977110 1221070 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859 for IP: 192.168.39.110
	I0414 14:51:47.977148 1221070 certs.go:194] generating shared ca certs ...
	I0414 14:51:47.977165 1221070 certs.go:226] acquiring lock for ca certs: {Name:mk7215406b4c41badf9eca6bf9f1036fd88f670e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:51:47.977358 1221070 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key
	I0414 14:51:47.977426 1221070 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key
	I0414 14:51:47.977447 1221070 certs.go:256] generating profile certs ...
	I0414 14:51:47.977567 1221070 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key
	I0414 14:51:47.977595 1221070 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.c955092d
	I0414 14:51:47.977626 1221070 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.c955092d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110 192.168.39.111 192.168.39.254]
	I0414 14:51:48.116172 1221070 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.c955092d ...
	I0414 14:51:48.116203 1221070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.c955092d: {Name:mk9edc6f7524dc9ba3b3dee538c59fbd77ccd148 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:51:48.116397 1221070 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.c955092d ...
	I0414 14:51:48.116412 1221070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.c955092d: {Name:mk18dc0fd4ba99bfeaa95fae1a08a91f3d1054da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:51:48.116516 1221070 certs.go:381] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.c955092d -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt
	I0414 14:51:48.116679 1221070 certs.go:385] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.c955092d -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key
	I0414 14:51:48.116822 1221070 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key
	I0414 14:51:48.116845 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0414 14:51:48.116863 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0414 14:51:48.116876 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0414 14:51:48.116888 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0414 14:51:48.116898 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0414 14:51:48.116907 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0414 14:51:48.116916 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0414 14:51:48.116925 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0414 14:51:48.116971 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem (1338 bytes)
	W0414 14:51:48.117008 1221070 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639_empty.pem, impossibly tiny 0 bytes
	I0414 14:51:48.117018 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 14:51:48.117040 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem (1082 bytes)
	I0414 14:51:48.117066 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:51:48.117086 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem (1675 bytes)
	I0414 14:51:48.117120 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:51:48.117150 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /usr/share/ca-certificates/12036392.pem
	I0414 14:51:48.117163 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:51:48.117173 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem -> /usr/share/ca-certificates/1203639.pem
	I0414 14:51:48.117829 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:51:48.149051 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:51:48.177053 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:51:48.209173 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:51:48.253240 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0414 14:51:48.287575 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 14:51:48.318676 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:51:48.341473 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 14:51:48.364366 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /usr/share/ca-certificates/12036392.pem (1708 bytes)
	I0414 14:51:48.392240 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:51:48.414262 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem --> /usr/share/ca-certificates/1203639.pem (1338 bytes)
	I0414 14:51:48.435434 1221070 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 14:51:48.451391 1221070 ssh_runner.go:195] Run: openssl version
	I0414 14:51:48.456643 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12036392.pem && ln -fs /usr/share/ca-certificates/12036392.pem /etc/ssl/certs/12036392.pem"
	I0414 14:51:48.467055 1221070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12036392.pem
	I0414 14:51:48.471094 1221070 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/12036392.pem
	I0414 14:51:48.471167 1221070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12036392.pem
	I0414 14:51:48.476620 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12036392.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 14:51:48.487041 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:51:48.497119 1221070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:51:48.501253 1221070 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:51:48.501303 1221070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:51:48.506464 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 14:51:48.516670 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1203639.pem && ln -fs /usr/share/ca-certificates/1203639.pem /etc/ssl/certs/1203639.pem"
	I0414 14:51:48.526675 1221070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1203639.pem
	I0414 14:51:48.530724 1221070 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1203639.pem
	I0414 14:51:48.530790 1221070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1203639.pem
	I0414 14:51:48.536779 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1203639.pem /etc/ssl/certs/51391683.0"
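The symlink names created above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-hash names: each link is `<output of openssl x509 -hash>.0`, which is how the TLS stack locates CAs in /etc/ssl/certs. A sketch deriving the link name the same way, exec'ing openssl rather than reimplementing the hash:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
    		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Printf("/etc/ssl/certs/%s.0\n", strings.TrimSpace(string(out)))
    }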
	I0414 14:51:48.547496 1221070 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:51:48.551752 1221070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0414 14:51:48.557436 1221070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0414 14:51:48.563312 1221070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0414 14:51:48.569039 1221070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0414 14:51:48.575033 1221070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0414 14:51:48.580579 1221070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
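Each openssl run above is `-checkend 86400`, i.e. "does this certificate remain valid for another 24 hours?". A rough Go equivalent using crypto/x509 (path and helper name are illustrative):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM cert at path expires within d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
    	fmt.Println("expires within 24h:", soon, "err:", err)
    }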
	I0414 14:51:48.586320 1221070 kubeadm.go:392] StartCluster: {Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.112 Port:0 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:51:48.586432 1221070 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0414 14:51:48.586516 1221070 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 14:51:48.621007 1221070 cri.go:89] found id: "731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0"
	I0414 14:51:48.621036 1221070 cri.go:89] found id: "0ec0a3a234c7c9ab89ca83a237362a229e9c5f0e94fdbf641b886cf994e1cd2f"
	I0414 14:51:48.621043 1221070 cri.go:89] found id: "922f97d06563e10c12ce83edd45e4f1aa0b78449dcdb50b413a7f4fc80cc346b"
	I0414 14:51:48.621047 1221070 cri.go:89] found id: "2df8ccb8d6ed928a95e69ecd1be2105fc737c699aa26805820a0af0eca5bb50d"
	I0414 14:51:48.621051 1221070 cri.go:89] found id: "e22a81661302ff340c9846a7a06a13d955ab98cfe8e7088e0c805fb4f3eee8a2"
	I0414 14:51:48.621056 1221070 cri.go:89] found id: "9914f8879fc4321c682c89c4d9b8a4cf65aa1773b5281eca94e0f93095a24f4d"
	I0414 14:51:48.621059 1221070 cri.go:89] found id: "8263b35014337f6119ba3a0d6487090fd5b1b3b8a002a99623620e847d186847"
	I0414 14:51:48.621063 1221070 cri.go:89] found id: "3607093f95b0430c4841d7be9ed19d0163ff2e9ee2889a44f89bd1ca07bf42d3"
	I0414 14:51:48.621066 1221070 cri.go:89] found id: "b9d0c942045346e617420beacf1ee53ebaa73b72295bfad233845fe524f8b15c"
	I0414 14:51:48.621076 1221070 cri.go:89] found id: "341626ffff967b14e3bfaa050905eba2b82a07223c0356ee50b5deeef6d9898b"
	I0414 14:51:48.621080 1221070 cri.go:89] found id: ""
	I0414 14:51:48.621136 1221070 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0414 14:51:48.634683 1221070 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-04-14T14:51:48Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0414 14:51:48.634779 1221070 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 14:51:48.644649 1221070 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0414 14:51:48.644668 1221070 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0414 14:51:48.644716 1221070 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0414 14:51:48.653466 1221070 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0414 14:51:48.653918 1221070 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-290859" does not appear in /home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:51:48.654026 1221070 kubeconfig.go:62] /home/jenkins/minikube-integration/20512-1196368/kubeconfig needs updating (will repair): [kubeconfig missing "ha-290859" cluster setting kubeconfig missing "ha-290859" context setting]
	I0414 14:51:48.654307 1221070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/kubeconfig: {Name:mkeb969af3beabfdafe344f27031959a97621135 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:51:48.654727 1221070 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:51:48.654871 1221070 kapi.go:59] client config for ha-290859: &rest.Config{Host:"https://192.168.39.110:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt", KeyFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key", CAFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
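The kubeconfig repair logged just above rewrites the file in place because the profile's cluster and context entries were missing after the restart. A hedged sketch of that step with client-go's clientcmd API (not minikube's kubeconfig package); the server URL and names come from the log:

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/tools/clientcmd"
    	"k8s.io/client-go/tools/clientcmd/api"
    )

    func main() {
    	path := "/home/jenkins/minikube-integration/20512-1196368/kubeconfig"
    	cfg, err := clientcmd.LoadFromFile(path)
    	if err != nil {
    		cfg = api.NewConfig()
    	}
    	name := "ha-290859"
    	if _, ok := cfg.Clusters[name]; !ok {
    		cfg.Clusters[name] = &api.Cluster{Server: "https://192.168.39.110:8443"}
    		cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
    	}
    	fmt.Println(clientcmd.WriteToFile(*cfg, path))
    }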
	I0414 14:51:48.655325 1221070 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0414 14:51:48.655343 1221070 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0414 14:51:48.655349 1221070 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0414 14:51:48.655355 1221070 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0414 14:51:48.655383 1221070 cert_rotation.go:140] Starting client certificate rotation controller
	I0414 14:51:48.655782 1221070 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0414 14:51:48.666379 1221070 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.110
	I0414 14:51:48.666416 1221070 kubeadm.go:597] duration metric: took 21.742146ms to restartPrimaryControlPlane
	I0414 14:51:48.666430 1221070 kubeadm.go:394] duration metric: took 80.118757ms to StartCluster
	I0414 14:51:48.666454 1221070 settings.go:142] acquiring lock: {Name:mk41907a6d0da0bb56b7cd58b5d8065ec36ecc97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:51:48.666542 1221070 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:51:48.667357 1221070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/kubeconfig: {Name:mkeb969af3beabfdafe344f27031959a97621135 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:51:48.667681 1221070 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0414 14:51:48.667715 1221070 start.go:241] waiting for startup goroutines ...
	I0414 14:51:48.667737 1221070 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 14:51:48.667972 1221070 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:51:48.670730 1221070 out.go:177] * Enabled addons: 
	I0414 14:51:48.671774 1221070 addons.go:514] duration metric: took 4.043718ms for enable addons: enabled=[]
	I0414 14:51:48.671816 1221070 start.go:246] waiting for cluster config update ...
	I0414 14:51:48.671833 1221070 start.go:255] writing updated cluster config ...
	I0414 14:51:48.673542 1221070 out.go:201] 
	I0414 14:51:48.674918 1221070 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:51:48.675012 1221070 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:51:48.676439 1221070 out.go:177] * Starting "ha-290859-m02" control-plane node in "ha-290859" cluster
	I0414 14:51:48.677470 1221070 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:51:48.677501 1221070 cache.go:56] Caching tarball of preloaded images
	I0414 14:51:48.677610 1221070 preload.go:172] Found /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0414 14:51:48.677625 1221070 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0414 14:51:48.677734 1221070 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:51:48.677945 1221070 start.go:360] acquireMachinesLock for ha-290859-m02: {Name:mk496006d22a0565bb9e0d565e1b3cb0cf0971cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 14:51:48.677999 1221070 start.go:364] duration metric: took 29.352µs to acquireMachinesLock for "ha-290859-m02"
	I0414 14:51:48.678015 1221070 start.go:96] Skipping create...Using existing machine configuration
	I0414 14:51:48.678023 1221070 fix.go:54] fixHost starting: m02
	I0414 14:51:48.678300 1221070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:51:48.678338 1221070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:51:48.694625 1221070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46149
	I0414 14:51:48.695133 1221070 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:51:48.695644 1221070 main.go:141] libmachine: Using API Version  1
	I0414 14:51:48.695672 1221070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:51:48.696059 1221070 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:51:48.696257 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:51:48.696396 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetState
	I0414 14:51:48.697918 1221070 fix.go:112] recreateIfNeeded on ha-290859-m02: state=Stopped err=<nil>
	I0414 14:51:48.697944 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	W0414 14:51:48.698147 1221070 fix.go:138] unexpected machine state, will restart: <nil>
	I0414 14:51:48.699709 1221070 out.go:177] * Restarting existing kvm2 VM for "ha-290859-m02" ...
	I0414 14:51:48.700791 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .Start
	I0414 14:51:48.701016 1221070 main.go:141] libmachine: (ha-290859-m02) starting domain...
	I0414 14:51:48.701037 1221070 main.go:141] libmachine: (ha-290859-m02) ensuring networks are active...
	I0414 14:51:48.701680 1221070 main.go:141] libmachine: (ha-290859-m02) Ensuring network default is active
	I0414 14:51:48.701964 1221070 main.go:141] libmachine: (ha-290859-m02) Ensuring network mk-ha-290859 is active
	I0414 14:51:48.702320 1221070 main.go:141] libmachine: (ha-290859-m02) getting domain XML...
	I0414 14:51:48.703123 1221070 main.go:141] libmachine: (ha-290859-m02) creating domain...
	I0414 14:51:49.928511 1221070 main.go:141] libmachine: (ha-290859-m02) waiting for IP...
	I0414 14:51:49.929302 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:51:49.929682 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:51:49.929753 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:51:49.929668 1221256 retry.go:31] will retry after 213.167481ms: waiting for domain to come up
	I0414 14:51:50.144304 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:51:50.144886 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:51:50.144914 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:51:50.144841 1221256 retry.go:31] will retry after 331.221156ms: waiting for domain to come up
	I0414 14:51:50.477450 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:51:50.477938 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:51:50.477993 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:51:50.477923 1221256 retry.go:31] will retry after 310.58732ms: waiting for domain to come up
	I0414 14:51:50.790523 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:51:50.791165 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:51:50.791199 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:51:50.791085 1221256 retry.go:31] will retry after 545.346683ms: waiting for domain to come up
	I0414 14:51:51.337935 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:51:51.338399 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:51:51.338425 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:51:51.338357 1221256 retry.go:31] will retry after 756.05518ms: waiting for domain to come up
	I0414 14:51:52.096242 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:51:52.096695 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:51:52.096730 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:51:52.096648 1221256 retry.go:31] will retry after 823.090094ms: waiting for domain to come up
	I0414 14:51:52.921657 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:51:52.922142 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:51:52.922184 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:51:52.922101 1221256 retry.go:31] will retry after 970.69668ms: waiting for domain to come up
	I0414 14:51:53.894927 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:51:53.895561 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:51:53.895594 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:51:53.895517 1221256 retry.go:31] will retry after 1.032622919s: waiting for domain to come up
	I0414 14:51:54.929442 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:51:54.929927 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:51:54.929952 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:51:54.929923 1221256 retry.go:31] will retry after 1.334812207s: waiting for domain to come up
	I0414 14:51:56.266967 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:51:56.267482 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:51:56.267510 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:51:56.267455 1221256 retry.go:31] will retry after 1.510894415s: waiting for domain to come up
	I0414 14:51:57.780426 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:51:57.780971 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:51:57.781004 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:51:57.780920 1221256 retry.go:31] will retry after 2.39467668s: waiting for domain to come up
	I0414 14:52:00.177702 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:00.178090 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:52:00.178121 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:52:00.178065 1221256 retry.go:31] will retry after 3.552625428s: waiting for domain to come up
	I0414 14:52:03.732281 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:03.732786 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:52:03.732838 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:52:03.732762 1221256 retry.go:31] will retry after 4.321714949s: waiting for domain to come up
	I0414 14:52:08.057427 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.057990 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has current primary IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.058015 1221070 main.go:141] libmachine: (ha-290859-m02) found domain IP: 192.168.39.111
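The wait-for-IP loop above retries with growing, jittered delays (213ms, 331ms, ... up to several seconds) until the DHCP lease appears, roughly twenty seconds in total here. A sketch of that backoff shape, printing the schedule instead of sleeping (growth factor and cap are assumptions, not minikube's retry package):

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    func main() {
    	delay := 200 * time.Millisecond
    	for attempt := 1; attempt <= 14; attempt++ {
    		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("attempt %d: would sleep %v\n", attempt, jittered)
    		if delay < 4*time.Second {
    			delay = delay * 3 / 2 // grow ~1.5x per attempt, as the log suggests
    		}
    	}
    }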
	I0414 14:52:08.058030 1221070 main.go:141] libmachine: (ha-290859-m02) reserving static IP address...
	I0414 14:52:08.058568 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "ha-290859-m02", mac: "52:54:00:f0:fd:94", ip: "192.168.39.111"} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.058598 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | skip adding static IP to network mk-ha-290859 - found existing host DHCP lease matching {name: "ha-290859-m02", mac: "52:54:00:f0:fd:94", ip: "192.168.39.111"}
	I0414 14:52:08.058616 1221070 main.go:141] libmachine: (ha-290859-m02) reserved static IP address 192.168.39.111 for domain ha-290859-m02
	I0414 14:52:08.058624 1221070 main.go:141] libmachine: (ha-290859-m02) waiting for SSH...
	I0414 14:52:08.058632 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | Getting to WaitForSSH function...
	I0414 14:52:08.061480 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.061822 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.061855 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.062002 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | Using SSH client type: external
	I0414 14:52:08.062025 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa (-rw-------)
	I0414 14:52:08.062058 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:52:08.062073 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | About to run SSH command:
	I0414 14:52:08.062084 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | exit 0
	I0414 14:52:08.183207 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | SSH cmd err, output: <nil>: 
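WaitForSSH succeeds once an external ssh invocation of `exit 0` returns cleanly; the flags logged above disable host-key checks and password auth and force the machine's private key. A rough equivalent using os/exec, with the address and key path taken from the log (this is a sketch of the probe, not the driver's actual implementation):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // sshReady runs "exit 0" over ssh with options like those in the log,
    // returning nil once the guest's sshd accepts the connection.
    func sshReady(addr, keyPath string) error {
    	cmd := exec.Command("ssh",
    		"-F", "/dev/null",
    		"-o", "ConnectTimeout=10",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "PasswordAuthentication=no",
    		"-o", "IdentitiesOnly=yes",
    		"-i", keyPath,
    		"-p", "22",
    		"docker@"+addr,
    		"exit 0")
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("ssh not ready: %v (output: %s)", err, out)
    	}
    	return nil
    }

    func main() {
    	err := sshReady("192.168.39.111",
    		"/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa")
    	fmt.Println("ssh ready:", err == nil)
    }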
	I0414 14:52:08.183609 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
	I0414 14:52:08.184236 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:52:08.186802 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.187282 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.187322 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.187609 1221070 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:52:08.187825 1221070 machine.go:93] provisionDockerMachine start ...
	I0414 14:52:08.187846 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:52:08.188131 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:52:08.190391 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.190830 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.190855 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.191024 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:52:08.191211 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:08.191410 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:08.191557 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:52:08.191706 1221070 main.go:141] libmachine: Using SSH client type: native
	I0414 14:52:08.192061 1221070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:52:08.192080 1221070 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 14:52:08.291480 1221070 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0414 14:52:08.291525 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:52:08.291906 1221070 buildroot.go:166] provisioning hostname "ha-290859-m02"
	I0414 14:52:08.291946 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:52:08.292200 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:52:08.295446 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.295895 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.295926 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.296203 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:52:08.296433 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:08.296612 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:08.296787 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:52:08.297073 1221070 main.go:141] libmachine: Using SSH client type: native
	I0414 14:52:08.297293 1221070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:52:08.297305 1221070 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-290859-m02 && echo "ha-290859-m02" | sudo tee /etc/hostname
	I0414 14:52:08.410482 1221070 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-290859-m02
	
	I0414 14:52:08.410517 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:52:08.413198 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.413585 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.413621 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.413794 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:52:08.414028 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:08.414223 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:08.414369 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:52:08.414529 1221070 main.go:141] libmachine: Using SSH client type: native
	I0414 14:52:08.414731 1221070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:52:08.414746 1221070 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-290859-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-290859-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-290859-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:52:08.522305 1221070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:52:08.522338 1221070 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1196368/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1196368/.minikube}
	I0414 14:52:08.522355 1221070 buildroot.go:174] setting up certificates
	I0414 14:52:08.522368 1221070 provision.go:84] configureAuth start
	I0414 14:52:08.522377 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:52:08.522678 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:52:08.525718 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.526180 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.526208 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.526396 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:52:08.528768 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.529141 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.529174 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.529288 1221070 provision.go:143] copyHostCerts
	I0414 14:52:08.529323 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:52:08.529356 1221070 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem, removing ...
	I0414 14:52:08.529364 1221070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:52:08.529418 1221070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem (1082 bytes)
	I0414 14:52:08.529544 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:52:08.529566 1221070 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem, removing ...
	I0414 14:52:08.529571 1221070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:52:08.529594 1221070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem (1123 bytes)
	I0414 14:52:08.529638 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:52:08.529656 1221070 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem, removing ...
	I0414 14:52:08.529663 1221070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:52:08.529681 1221070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem (1675 bytes)
	I0414 14:52:08.529727 1221070 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem org=jenkins.ha-290859-m02 san=[127.0.0.1 192.168.39.111 ha-290859-m02 localhost minikube]
	I0414 14:52:08.556497 1221070 provision.go:177] copyRemoteCerts
	I0414 14:52:08.556548 1221070 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:52:08.556569 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:52:08.559078 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.559480 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.559504 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.559685 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:52:08.559875 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:08.560067 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:52:08.560219 1221070 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:52:08.637398 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0414 14:52:08.637469 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 14:52:08.661142 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0414 14:52:08.661219 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0414 14:52:08.683109 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0414 14:52:08.683191 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0414 14:52:08.705705 1221070 provision.go:87] duration metric: took 183.321321ms to configureAuth
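configureAuth generated a server certificate whose SANs match the provision.go line above: [127.0.0.1 192.168.39.111 ha-290859-m02 localhost minikube]. A self-contained crypto/x509 sketch of that step, assuming a throwaway CA in place of minikube's real ca.pem/ca-key.pem:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func must[T any](v T, err error) T {
    	if err != nil {
    		panic(err)
    	}
    	return v
    }

    func main() {
    	// Self-signed CA standing in for minikube's ca.pem/ca-key.pem.
    	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(365 * 24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caCert := must(x509.ParseCertificate(
    		must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))))

    	// Server cert whose SANs mirror the provision.go log line.
    	srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-290859-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
    		DNSNames:     []string{"ha-290859-m02", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.111")},
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
    	fmt.Printf("issued server cert (%d bytes DER)\n", len(der))
    }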
	I0414 14:52:08.705738 1221070 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:52:08.706026 1221070 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:52:08.706045 1221070 machine.go:96] duration metric: took 518.207609ms to provisionDockerMachine
	I0414 14:52:08.706054 1221070 start.go:293] postStartSetup for "ha-290859-m02" (driver="kvm2")
	I0414 14:52:08.706063 1221070 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:52:08.706087 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:52:08.706363 1221070 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:52:08.706392 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:52:08.709099 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.709429 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.709457 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.709689 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:52:08.709903 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:08.710118 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:52:08.710263 1221070 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:52:08.791281 1221070 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:52:08.795310 1221070 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:52:08.795344 1221070 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/addons for local assets ...
	I0414 14:52:08.795409 1221070 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/files for local assets ...
	I0414 14:52:08.795482 1221070 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> 12036392.pem in /etc/ssl/certs
	I0414 14:52:08.795492 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /etc/ssl/certs/12036392.pem
	I0414 14:52:08.795570 1221070 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:52:08.806018 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:52:08.828791 1221070 start.go:296] duration metric: took 122.715902ms for postStartSetup
	I0414 14:52:08.828841 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:52:08.829192 1221070 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0414 14:52:08.829225 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:52:08.832093 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.832474 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.832500 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.832687 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:52:08.832874 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:08.833046 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:52:08.833191 1221070 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:52:08.914136 1221070 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0414 14:52:08.914227 1221070 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0414 14:52:08.970338 1221070 fix.go:56] duration metric: took 20.292306098s for fixHost
	I0414 14:52:08.970422 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:52:08.973148 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.973612 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.973662 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.973866 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:52:08.974071 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:08.974273 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:08.974383 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:52:08.974544 1221070 main.go:141] libmachine: Using SSH client type: native
	I0414 14:52:08.974752 1221070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:52:08.974761 1221070 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:52:09.075896 1221070 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744642329.038020711
	
	I0414 14:52:09.075916 1221070 fix.go:216] guest clock: 1744642329.038020711
	I0414 14:52:09.075924 1221070 fix.go:229] Guest: 2025-04-14 14:52:09.038020711 +0000 UTC Remote: 2025-04-14 14:52:08.970369466 +0000 UTC m=+44.085587632 (delta=67.651245ms)
	I0414 14:52:09.075939 1221070 fix.go:200] guest clock delta is within tolerance: 67.651245ms
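The clock check above runs `date +%s.%N` in the guest, parses the seconds.nanoseconds output, and accepts the 67.651245ms delta against the host as within tolerance. A small sketch of the parse-and-compare, assuming an illustrative threshold rather than fix.go's real one:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock turns "1744642329.038020711" (seconds.nanoseconds
    // from `date +%s.%N`) into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1744642329.038020711")
    	if err != nil {
    		panic(err)
    	}
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	// The 2s threshold here is illustrative, not minikube's actual tolerance.
    	fmt.Printf("guest clock delta: %s (ok: %v)\n", delta, delta < 2*time.Second)
    }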
	I0414 14:52:09.075944 1221070 start.go:83] releasing machines lock for "ha-290859-m02", held for 20.397936123s
	I0414 14:52:09.075962 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:52:09.076232 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:52:09.079036 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:09.079425 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:09.079456 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:09.081479 1221070 out.go:177] * Found network options:
	I0414 14:52:09.082752 1221070 out.go:177]   - NO_PROXY=192.168.39.110
	W0414 14:52:09.084044 1221070 proxy.go:119] fail to check proxy env: Error ip not in block
	I0414 14:52:09.084079 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:52:09.084689 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:52:09.084887 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:52:09.084984 1221070 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:52:09.085023 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	W0414 14:52:09.085117 1221070 proxy.go:119] fail to check proxy env: Error ip not in block
	I0414 14:52:09.085206 1221070 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0414 14:52:09.085232 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:52:09.088187 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:09.088476 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:09.088613 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:09.088643 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:09.088794 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:52:09.088903 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:09.088928 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:09.088974 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:09.089083 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:52:09.089161 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:52:09.089227 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:09.089297 1221070 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:52:09.089336 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:52:09.089483 1221070 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	W0414 14:52:09.194292 1221070 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:52:09.194439 1221070 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:52:09.211568 1221070 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 14:52:09.211600 1221070 start.go:495] detecting cgroup driver to use...
	I0414 14:52:09.211684 1221070 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0414 14:52:09.239355 1221070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0414 14:52:09.252164 1221070 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:52:09.252247 1221070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:52:09.266619 1221070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:52:09.279466 1221070 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:52:09.408504 1221070 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:52:09.554621 1221070 docker.go:233] disabling docker service ...
	I0414 14:52:09.554705 1221070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:52:09.567849 1221070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:52:09.579882 1221070 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:52:09.691627 1221070 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:52:09.801979 1221070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 14:52:09.824437 1221070 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:52:09.841408 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0414 14:52:09.851062 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0414 14:52:09.860777 1221070 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0414 14:52:09.860826 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0414 14:52:09.870133 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:52:09.879955 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0414 14:52:09.889567 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:52:09.899405 1221070 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:52:09.909754 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0414 14:52:09.919673 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0414 14:52:09.929572 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0414 14:52:09.939053 1221070 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:52:09.947490 1221070 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:52:09.947546 1221070 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:52:09.959627 1221070 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 14:52:09.968379 1221070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:52:10.086027 1221070 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0414 14:52:10.118333 1221070 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0414 14:52:10.118430 1221070 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:52:10.122969 1221070 retry.go:31] will retry after 818.918333ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0414 14:52:10.943062 1221070 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
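After restarting containerd, the log shows a 60s wait for /run/containerd/containerd.sock: stat fails once while the daemon is still coming up, then succeeds on retry. A compact version of that socket wait, as a sketch rather than the real ssh_runner-based implementation:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls for a socket path until it exists or the timeout
    // elapses, like the "Will wait 60s for socket path" step above.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
    	fmt.Println(waitForSocket("/run/containerd/containerd.sock", 60*time.Second))
    }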
	I0414 14:52:10.948132 1221070 start.go:563] Will wait 60s for crictl version
	I0414 14:52:10.948196 1221070 ssh_runner.go:195] Run: which crictl
	I0414 14:52:10.952231 1221070 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:52:10.988005 1221070 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0414 14:52:10.988097 1221070 ssh_runner.go:195] Run: containerd --version
	I0414 14:52:11.012963 1221070 ssh_runner.go:195] Run: containerd --version
	I0414 14:52:11.038206 1221070 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.23 ...
	I0414 14:52:11.039588 1221070 out.go:177]   - env NO_PROXY=192.168.39.110
	I0414 14:52:11.040724 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:52:11.043716 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:11.044108 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:11.044129 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:11.044384 1221070 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 14:52:11.048381 1221070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:52:11.060281 1221070 mustload.go:65] Loading cluster: ha-290859
	I0414 14:52:11.060535 1221070 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:52:11.060920 1221070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:52:11.060972 1221070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:52:11.076673 1221070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40435
	I0414 14:52:11.077200 1221070 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:52:11.077672 1221070 main.go:141] libmachine: Using API Version  1
	I0414 14:52:11.077694 1221070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:52:11.078067 1221070 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:52:11.078244 1221070 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:52:11.079808 1221070 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:52:11.080127 1221070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:52:11.080174 1221070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:52:11.095417 1221070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37849
	I0414 14:52:11.095844 1221070 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:52:11.096258 1221070 main.go:141] libmachine: Using API Version  1
	I0414 14:52:11.096277 1221070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:52:11.096639 1221070 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:52:11.096826 1221070 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:52:11.096989 1221070 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859 for IP: 192.168.39.111
	I0414 14:52:11.097003 1221070 certs.go:194] generating shared ca certs ...
	I0414 14:52:11.097029 1221070 certs.go:226] acquiring lock for ca certs: {Name:mk7215406b4c41badf9eca6bf9f1036fd88f670e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:52:11.097193 1221070 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key
	I0414 14:52:11.097269 1221070 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key
	I0414 14:52:11.097285 1221070 certs.go:256] generating profile certs ...
	I0414 14:52:11.097381 1221070 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key
	I0414 14:52:11.097463 1221070 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e
	I0414 14:52:11.097524 1221070 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key
	I0414 14:52:11.097538 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0414 14:52:11.097560 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0414 14:52:11.097577 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0414 14:52:11.097593 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0414 14:52:11.097611 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0414 14:52:11.097629 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0414 14:52:11.097646 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0414 14:52:11.097662 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0414 14:52:11.097724 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem (1338 bytes)
	W0414 14:52:11.097762 1221070 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639_empty.pem, impossibly tiny 0 bytes
	I0414 14:52:11.097777 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 14:52:11.097809 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem (1082 bytes)
	I0414 14:52:11.097839 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:52:11.097866 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem (1675 bytes)
	I0414 14:52:11.097945 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:52:11.097992 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:52:11.098014 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem -> /usr/share/ca-certificates/1203639.pem
	I0414 14:52:11.098038 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /usr/share/ca-certificates/12036392.pem
	I0414 14:52:11.098070 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:52:11.100966 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:52:11.101386 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:52:11.101405 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:52:11.101550 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:52:11.101731 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:52:11.101862 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:52:11.102010 1221070 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:52:11.175602 1221070 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0414 14:52:11.180006 1221070 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0414 14:52:11.189968 1221070 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0414 14:52:11.193728 1221070 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0414 14:52:11.203099 1221070 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0414 14:52:11.207009 1221070 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0414 14:52:11.216071 1221070 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0414 14:52:11.219518 1221070 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0414 14:52:11.228688 1221070 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0414 14:52:11.232239 1221070 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0414 14:52:11.241095 1221070 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0414 14:52:11.244486 1221070 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0414 14:52:11.253441 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:52:11.277269 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:52:11.299096 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:52:11.320223 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:52:11.341633 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0414 14:52:11.362868 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 14:52:11.386598 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:52:11.408609 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 14:52:11.430516 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:52:11.452312 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem --> /usr/share/ca-certificates/1203639.pem (1338 bytes)
	I0414 14:52:11.474971 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /usr/share/ca-certificates/12036392.pem (1708 bytes)
	I0414 14:52:11.496336 1221070 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0414 14:52:11.511579 1221070 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0414 14:52:11.526436 1221070 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0414 14:52:11.541220 1221070 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0414 14:52:11.556734 1221070 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0414 14:52:11.573710 1221070 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0414 14:52:11.589103 1221070 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0414 14:52:11.604809 1221070 ssh_runner.go:195] Run: openssl version
	I0414 14:52:11.610110 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1203639.pem && ln -fs /usr/share/ca-certificates/1203639.pem /etc/ssl/certs/1203639.pem"
	I0414 14:52:11.620147 1221070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1203639.pem
	I0414 14:52:11.624394 1221070 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1203639.pem
	I0414 14:52:11.624454 1221070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1203639.pem
	I0414 14:52:11.629850 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1203639.pem /etc/ssl/certs/51391683.0"
	I0414 14:52:11.639862 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12036392.pem && ln -fs /usr/share/ca-certificates/12036392.pem /etc/ssl/certs/12036392.pem"
	I0414 14:52:11.649796 1221070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12036392.pem
	I0414 14:52:11.653828 1221070 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/12036392.pem
	I0414 14:52:11.653894 1221070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12036392.pem
	I0414 14:52:11.659174 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12036392.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 14:52:11.669032 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:52:11.678764 1221070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:52:11.682817 1221070 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:52:11.682885 1221070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:52:11.688098 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 14:52:11.697831 1221070 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:52:11.701550 1221070 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 14:52:11.701601 1221070 kubeadm.go:934] updating node {m02 192.168.39.111 8443 v1.32.2 containerd true true} ...
	I0414 14:52:11.701691 1221070 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-290859-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
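A note on the rendered kubelet drop-in above: the bare `ExecStart=` line is deliberate systemd syntax. It clears the ExecStart inherited from the packaged kubelet.service before the override sets the full command line, including `--node-ip=192.168.39.111` and `--hostname-override=ha-290859-m02` for this node.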
	I0414 14:52:11.701720 1221070 kube-vip.go:115] generating kube-vip config ...
	I0414 14:52:11.701774 1221070 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0414 14:52:11.717854 1221070 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0414 14:52:11.717951 1221070 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.10
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
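The generated manifest above runs kube-vip as a static pod on each control plane: ARP mode (vip_arp) announces the shared VIP 192.168.39.254 on eth0, a Lease named plndr-cp-lock elects which node announces it, and lb_enable load-balances API server traffic across control-plane nodes on port 8443. A small sketch that pulls the VIP out of such a manifest, assuming gopkg.in/yaml.v3 is available and abbreviating the manifest to the fields read:

    package main

    import (
    	"fmt"

    	"gopkg.in/yaml.v3"
    )

    type envVar struct {
    	Name  string `yaml:"name"`
    	Value string `yaml:"value"`
    }

    type pod struct {
    	Spec struct {
    		Containers []struct {
    			Env []envVar `yaml:"env"`
    		} `yaml:"containers"`
    	} `yaml:"spec"`
    }

    // Abbreviated copy of the generated kube-vip manifest; only the fields
    // decoded below are included.
    const manifest = `
    spec:
      containers:
      - env:
        - name: address
          value: 192.168.39.254
        - name: lb_port
          value: "8443"
    `

    func main() {
    	var p pod
    	if err := yaml.Unmarshal([]byte(manifest), &p); err != nil {
    		panic(err)
    	}
    	for _, c := range p.Spec.Containers {
    		for _, e := range c.Env {
    			if e.Name == "address" {
    				fmt.Println("kube-vip VIP:", e.Value)
    			}
    		}
    	}
    }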
	I0414 14:52:11.718009 1221070 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 14:52:11.727618 1221070 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 14:52:11.727676 1221070 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0414 14:52:11.736203 1221070 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0414 14:52:11.751774 1221070 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 14:52:11.768120 1221070 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1442 bytes)
	I0414 14:52:11.783489 1221070 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0414 14:52:11.787006 1221070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:52:11.798424 1221070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:52:11.903985 1221070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:52:11.921547 1221070 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0414 14:52:11.921874 1221070 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:52:11.923383 1221070 out.go:177] * Verifying Kubernetes components...
	I0414 14:52:11.924548 1221070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:52:12.079718 1221070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:52:12.096131 1221070 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:52:12.096280 1221070 kapi.go:59] client config for ha-290859: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt", KeyFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key", CAFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0414 14:52:12.096344 1221070 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.110:8443
	I0414 14:52:12.096629 1221070 node_ready.go:35] waiting up to 6m0s for node "ha-290859-m02" to be "Ready" ...
	I0414 14:52:12.096770 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:12.096778 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:12.096786 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:12.096792 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:12.105014 1221070 round_trippers.go:581] Response Status: 404 Not Found in 8 milliseconds
	I0414 14:52:12.596840 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:12.596864 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:12.596873 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:12.596878 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:12.599193 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:13.096896 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:13.096921 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:13.096930 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:13.096935 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:13.099008 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:13.597788 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:13.597813 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:13.597822 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:13.597826 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:13.600141 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:14.097364 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:14.097390 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:14.097398 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:14.097401 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:14.099682 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:14.099822 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	[... ~100 further identical GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02 polls at ~500ms intervals from 14:52:14.597 through 14:53:06.100 elided: every request returned 404 Not Found in 1-8 milliseconds, and node_ready.go:53 logged error getting node "ha-290859-m02": nodes "ha-290859-m02" not found after roughly every fifth attempt ...]
	I0414 14:53:06.597714 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:06.597739 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:06.597748 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:06.597752 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:06.600470 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:07.097213 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:07.097240 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:07.097250 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:07.097253 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:07.099963 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:07.597789 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:07.597816 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:07.597826 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:07.597831 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:07.600855 1221070 round_trippers.go:581] Response Status: 404 Not Found in 3 milliseconds
	I0414 14:53:07.600957 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:08.097673 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:08.097701 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:08.097710 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:08.097715 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:08.100645 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:08.597358 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:08.597384 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:08.597393 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:08.597397 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:08.599788 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:09.097393 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:09.097420 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:09.097429 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:09.097434 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:09.099924 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:09.597707 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:09.597732 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:09.597742 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:09.597747 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:09.599970 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:10.097178 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:10.097207 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:10.097216 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:10.097221 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:10.099537 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:10.099624 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:10.597236 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:10.597263 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:10.597271 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:10.597275 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:10.599552 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:11.097961 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:11.097993 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:11.098008 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:11.098016 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:11.100563 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:11.597756 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:11.597782 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:11.597790 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:11.597795 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:11.600339 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:12.097054 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:12.097083 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:12.097093 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:12.097099 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:12.099641 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:12.099739 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:12.597376 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:12.597402 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:12.597411 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:12.597417 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:12.599658 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:13.097459 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:13.097484 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:13.097492 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:13.097502 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:13.099810 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:13.597571 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:13.597596 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:13.597605 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:13.597609 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:13.600010 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:14.096947 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:14.096970 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:14.096979 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:14.096990 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:14.099343 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:14.597063 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:14.597091 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:14.597101 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:14.597105 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:14.599641 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:14.599723 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:15.097631 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:15.097658 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:15.097668 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:15.097682 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:15.100287 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:15.597176 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:15.597202 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:15.597211 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:15.597215 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:15.599531 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:16.097711 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:16.097732 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:16.097742 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:16.097746 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:16.101211 1221070 round_trippers.go:581] Response Status: 404 Not Found in 3 milliseconds
	I0414 14:53:16.597571 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:16.597597 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:16.597606 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:16.597610 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:16.599963 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:16.600075 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:17.097758 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:17.097783 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:17.097792 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:17.097796 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:17.099932 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:17.597691 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:17.597718 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:17.597727 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:17.597733 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:17.600352 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:18.097050 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:18.097078 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:18.097089 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:18.097096 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:18.099428 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:18.597110 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:18.597145 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:18.597157 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:18.597166 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:18.599600 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:19.096963 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:19.096987 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:19.096998 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:19.097003 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:19.099491 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:19.099580 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:19.597231 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:19.597263 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:19.597276 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:19.597283 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:19.600009 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:20.096886 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:20.096914 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:20.096926 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:20.096932 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:20.099209 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:20.596960 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:20.596986 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:20.596998 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:20.597004 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:20.599960 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:21.097055 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:21.097077 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:21.097088 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:21.097094 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:21.099402 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:21.597633 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:21.597662 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:21.597674 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:21.597680 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:21.599894 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:21.600006 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:22.097732 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:22.097762 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:22.097774 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:22.097782 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:22.100319 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:22.597118 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:22.597146 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:22.597157 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:22.597163 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:22.599684 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:23.097462 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:23.097495 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:23.097507 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:23.097513 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:23.100099 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:23.597914 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:23.597944 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:23.597953 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:23.597959 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:23.600364 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:23.600532 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:24.097607 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:24.097632 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:24.097640 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:24.097644 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:24.100185 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:24.596899 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:24.596940 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:24.596951 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:24.596957 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:24.599633 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:25.097761 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:25.097789 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:25.097803 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:25.097808 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:25.100205 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:25.596931 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:25.596958 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:25.596969 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:25.596974 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:25.599583 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:26.097899 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:26.097925 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:26.097934 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:26.097938 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:26.100330 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:26.100425 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:26.597539 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:26.597566 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:26.597575 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:26.597580 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:26.600215 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:27.096966 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:27.096998 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:27.097007 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:27.097012 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:27.099631 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:27.597574 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:27.597600 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:27.597607 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:27.597612 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:27.599913 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:28.097869 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:28.097894 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:28.097903 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:28.097906 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:28.100382 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:28.100477 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:28.597225 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:28.597254 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:28.597263 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:28.597269 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:28.599684 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:29.097190 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:29.097218 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:29.097229 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:29.097262 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:29.099744 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:29.597605 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:29.597634 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:29.597645 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:29.597652 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:29.600430 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:30.097442 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:30.097468 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:30.097476 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:30.097480 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:30.099457 1221070 round_trippers.go:581] Response Status: 404 Not Found in 1 milliseconds
	I0414 14:53:30.597276 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:30.597303 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:30.597312 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:30.597316 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:30.599873 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:30.599951 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:31.097106 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:31.097144 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:31.097153 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:31.097158 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:31.099513 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:31.597757 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:31.597783 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:31.597794 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:31.597798 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:31.600463 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:32.097182 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:32.097207 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:32.097215 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:32.097219 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:32.099765 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:32.597512 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:32.597537 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:32.597546 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:32.597551 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:32.599820 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:33.097643 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:33.097666 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:33.097674 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:33.097678 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:33.099796 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:33.099884 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:33.597718 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:33.597746 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:33.597755 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:33.597765 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:33.600269 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:34.097517 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:34.097544 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:34.097553 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:34.097558 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:34.100747 1221070 round_trippers.go:581] Response Status: 404 Not Found in 3 milliseconds
	I0414 14:53:34.597531 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:34.597558 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:34.597567 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:34.597570 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:34.599907 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:35.097832 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:35.097857 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:35.097869 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:35.097875 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:35.100197 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:35.100304 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:35.596881 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:35.596909 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:35.596918 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:35.596921 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:35.599227 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:36.097506 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:36.097528 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:36.097537 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:36.097541 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:36.099779 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:36.597044 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:36.597075 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:36.597086 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:36.597090 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:36.599704 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:37.097488 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:37.097512 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:37.097521 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:37.097527 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:37.099413 1221070 round_trippers.go:581] Response Status: 404 Not Found in 1 milliseconds
	I0414 14:53:37.596959 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:37.596985 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:37.596994 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:37.596998 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:37.599807 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:37.599901 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:38.097637 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:38.097663 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:38.097673 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:38.097678 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:38.100336 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:38.597075 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:38.597101 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:38.597110 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:38.597115 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:38.599545 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:39.097005 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:39.097031 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:39.097042 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:39.097047 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:39.099289 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:39.596971 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:39.596997 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:39.597006 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:39.597011 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:39.599228 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:40.097179 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:40.097207 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:40.097215 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:40.097221 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:40.099966 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:40.100061 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:40.597818 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:40.597844 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:40.597854 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:40.597859 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:40.600104 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:41.097551 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:41.097574 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:41.097586 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:41.097593 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:41.099851 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:41.596971 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:41.596996 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:41.597005 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:41.597008 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:41.599346 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:42.097228 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:42.097253 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:42.097262 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:42.097268 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:42.099597 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:42.597496 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:42.597522 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:42.597537 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:42.597542 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:42.599923 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:42.600028 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:43.097893 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:43.097928 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:43.097940 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:43.097946 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:43.100249 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:43.597079 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:43.597103 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:43.597111 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:43.597115 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:43.599554 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:44.097935 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:44.097963 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:44.097972 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:44.097978 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:44.100650 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:44.597578 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:44.597602 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:44.597611 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:44.597615 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:44.599830 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:45.097892 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:45.097932 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:45.097940 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:45.097960 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:45.100091 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:45.100177 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:45.596937 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:45.596965 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:45.596975 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:45.596982 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:45.599620 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:46.097332 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:46.097359 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:46.097367 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:46.097373 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:46.099777 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:46.597031 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:46.597059 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:46.597068 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:46.597075 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:46.599403 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:47.097731 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:47.097757 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:47.097766 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:47.097769 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:47.100280 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:47.100377 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:47.597123 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:47.597151 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:47.597170 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:47.597175 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:47.599534 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:48.097336 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:48.097361 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:48.097370 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:48.097374 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:48.099675 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:48.597501 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:48.597534 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:48.597547 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:48.597560 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:48.600236 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:49.097710 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:49.097738 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:49.097747 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:49.097750 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:49.100057 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:49.596902 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:49.596926 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:49.596935 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:49.596941 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:49.599460 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:49.599564 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:50.097595 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:50.097620 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:50.097629 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:50.097633 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:50.099825 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:50.597754 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:50.597780 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:50.597789 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:50.597793 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:50.600075 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:51.097870 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:51.097899 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:51.097909 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:51.097929 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:51.100654 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:51.596969 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:51.596997 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:51.597006 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:51.597010 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:51.599564 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:51.599659 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:52.097262 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:52.097289 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:52.097297 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:52.097302 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:52.099885 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:52.597623 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:52.597649 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:52.597657 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:52.597662 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:52.600287 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:53.097029 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:53.097056 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:53.097064 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:53.097070 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:53.100094 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:53.597857 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:53.597883 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:53.597892 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:53.597896 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:53.600381 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:53.600486 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:54.097694 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:54.097720 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:54.097733 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:54.097739 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:54.100246 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:54.596985 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:54.597015 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:54.597024 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:54.597029 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:54.599531 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:55.097645 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:55.097670 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:55.097678 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:55.097682 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:55.100175 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:55.596893 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:55.596928 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:55.596937 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:55.596942 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:55.599467 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:56.097332 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:56.097359 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:56.097367 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:56.097372 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:56.099838 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:56.099935 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:56.597119 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:56.597143 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:56.597152 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:56.597156 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:56.599329 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:57.097196 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:57.097223 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:57.097233 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:57.097238 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:57.099869 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:57.597766 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:57.597794 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:57.597806 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:57.597810 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:57.600130 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:58.096957 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:58.096983 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:58.096991 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:58.096999 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:58.099238 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:58.597087 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:58.597112 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:58.597126 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:58.597132 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:58.599330 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:58.599420 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:59.097878 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:59.097909 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:59.097921 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:59.097927 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:59.100274 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:59.597081 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:59.597111 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:59.597122 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:59.597127 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:59.599692 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:00.097673 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:00.097700 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:00.097709 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:00.097712 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:00.100091 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:00.597900 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:00.597929 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:00.597940 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:00.597946 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:00.600276 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:00.600373 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:01.097002 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:01.097028 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:01.097036 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:01.097042 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:01.099132 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:01.597696 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:01.597720 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:01.597729 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:01.597734 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:01.600078 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:02.096932 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:02.096958 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:02.096966 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:02.096971 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:02.099544 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:02.597385 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:02.597411 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:02.597419 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:02.597424 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:02.599758 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:03.097724 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:03.097751 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:03.097759 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:03.097763 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:03.099959 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:03.100080 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:03.596849 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:03.596874 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:03.596883 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:03.596887 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:03.599335 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:04.097559 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:04.097583 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:04.097591 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:04.097596 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:04.099995 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:04.597777 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:04.597812 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:04.597832 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:04.597838 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:04.600226 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:05.097053 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:05.097079 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:05.097088 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:05.097092 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:05.099413 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:05.597132 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:05.597157 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:05.597175 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:05.597181 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:05.599523 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:05.599615 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:06.097257 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:06.097285 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:06.097294 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:06.097298 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:06.099686 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:06.597194 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:06.597218 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:06.597233 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:06.597237 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:06.599753 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:07.097514 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:07.097540 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:07.097548 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:07.097555 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:07.100208 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:07.596890 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:07.596917 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:07.596926 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:07.596929 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:07.599139 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:08.096999 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:08.097025 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:08.097034 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:08.097038 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:08.099440 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:08.099538 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:08.597199 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:08.597225 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:08.597233 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:08.597236 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:08.599496 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:09.096957 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:09.096982 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:09.096991 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:09.096995 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:09.099328 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:09.597143 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:09.597166 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:09.597175 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:09.597187 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:09.599350 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:10.097206 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:10.097231 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:10.097240 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:10.097243 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:10.099687 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:10.099779 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:10.597576 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:10.597599 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:10.597608 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:10.597613 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:10.599844 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:11.097696 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:11.097722 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:11.097730 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:11.097735 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:11.100237 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:11.597785 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:11.597807 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:11.597816 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:11.597823 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:11.600490 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:12.097100 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:12.097126 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:12.097135 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:12.097140 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:12.099612 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:12.597382 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:12.597416 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:12.597430 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:12.597439 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:12.599678 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:12.599758 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:13.097501 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:13.097526 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:13.097535 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:13.097540 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:13.099917 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:13.597744 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:13.597770 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:13.597779 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:13.597785 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:13.600202 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:14.097453 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:14.097481 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:14.097491 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:14.097495 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:14.100217 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:14.596880 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:14.596907 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:14.596916 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:14.596921 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:14.599285 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:15.097175 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:15.097200 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:15.097209 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:15.097212 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:15.099276 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:15.099364 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:15.597074 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:15.597108 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:15.597120 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:15.597125 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:15.599444 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:16.097331 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:16.097360 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:16.097373 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:16.097383 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:16.099711 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:16.597474 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:16.597502 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:16.597512 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:16.597517 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:16.599821 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:17.097721 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:17.097747 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:17.097762 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:17.097768 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:17.100198 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:17.100276 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:17.596982 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:17.597006 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:17.597014 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:17.597018 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:17.599367 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:18.097273 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:18.097299 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:18.097310 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:18.097314 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:18.099609 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:18.597568 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:18.597593 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:18.597602 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:18.597606 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:18.600731 1221070 round_trippers.go:581] Response Status: 404 Not Found in 3 milliseconds
	I0414 14:54:19.097140 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:19.097166 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:19.097175 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:19.097180 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:19.099397 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:19.597213 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:19.597238 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:19.597247 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:19.597252 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:19.599471 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:19.599566 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:20.097477 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:20.097502 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:20.097511 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:20.097515 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:20.099861 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:20.597797 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:20.597825 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:20.597837 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:20.597845 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:20.600174 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:21.097026 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:21.097053 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:21.097066 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:21.097072 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:21.099500 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:21.597281 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:21.597304 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:21.597313 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:21.597317 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:21.599496 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:21.599588 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:22.097325 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:22.097355 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:22.097366 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:22.097370 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:22.099812 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:22.597762 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:22.597792 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:22.597804 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:22.597817 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:22.599813 1221070 round_trippers.go:581] Response Status: 404 Not Found in 1 milliseconds
	I0414 14:54:23.097828 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:23.097858 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:23.097871 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:23.097881 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:23.100396 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:23.597213 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:23.597241 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:23.597252 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:23.597258 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:23.599717 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:23.599796 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:24.096996 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:24.097021 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:24.097049 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:24.097055 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:24.099311 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:24.597126 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:24.597149 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:24.597157 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:24.597162 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:24.599602 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:25.097673 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:25.097695 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:25.097703 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:25.097710 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:25.099822 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:25.597641 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:25.597667 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:25.597675 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:25.597678 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:25.600012 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:25.600100 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:26.097816 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:26.097842 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:26.097850 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:26.097854 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:26.100489 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:26.597097 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:26.597122 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:26.597132 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:26.597137 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:26.599865 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:27.097687 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:27.097714 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:27.097723 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:27.097728 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:27.100355 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:27.597087 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:27.597111 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:27.597124 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:27.597128 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:27.599434 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:28.097160 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:28.097192 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:28.097200 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:28.097205 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:28.099497 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:28.099582 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:28.597237 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:28.597261 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:28.597272 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:28.597278 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:28.599694 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:29.097091 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:29.097118 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:29.097127 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:29.097132 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:29.099540 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:29.597363 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:29.597392 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:29.597405 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:29.597411 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:29.600172 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:30.097121 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:30.097144 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:30.097153 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:30.097157 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:30.099513 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:30.099612 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:30.597347 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:30.597371 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:30.597380 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:30.597384 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:30.600156 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:31.096952 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:31.096988 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:31.096997 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:31.097001 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:31.099465 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:31.597116 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:31.597143 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:31.597153 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:31.597158 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:31.599567 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:32.097317 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:32.097346 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:32.097358 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:32.097365 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:32.099660 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:32.099757 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:32.597405 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:32.597430 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:32.597439 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:32.597441 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:32.599811 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:33.097627 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:33.097653 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:33.097662 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:33.097667 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:33.099982 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:33.597753 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:33.597778 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:33.597787 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:33.597792 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:33.600559 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:34.097871 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:34.097899 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:34.097912 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:34.097919 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:34.100469 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:34.100556 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:34.597193 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:34.597217 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:34.597226 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:34.597232 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:34.600162 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:35.097109 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:35.097135 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:35.097144 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:35.097149 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:35.099576 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:35.597285 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:35.597313 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:35.597326 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:35.597333 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:35.599938 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:36.096921 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:36.096946 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:36.096954 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:36.096959 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:36.099227 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:36.597866 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:36.597904 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:36.597913 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:36.597919 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:36.600354 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:36.600463 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:37.097063 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:37.097090 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:37.097100 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:37.097105 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:37.099379 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:37.597122 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:37.597146 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:37.597154 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:37.597158 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:37.599519 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:38.097366 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:38.097393 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:38.097408 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:38.097414 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:38.099965 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:38.597915 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:38.597940 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:38.597949 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:38.597954 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:38.600572 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:38.600660 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:39.097060 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:39.097087 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:39.097096 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:39.097101 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:39.099507 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:39.597337 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:39.597362 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:39.597371 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:39.597375 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:39.599715 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:40.097688 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:40.097713 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:40.097724 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:40.097729 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:40.100033 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:40.596909 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:40.596939 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:40.596951 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:40.596957 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:40.599175 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:41.097072 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:41.097099 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:41.097107 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:41.097111 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:41.099460 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:41.099539 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:41.597139 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:41.597165 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:41.597174 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:41.597178 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:41.599709 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:42.097560 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:42.097587 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:42.097595 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:42.097600 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:42.099863 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:42.597812 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:42.597845 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:42.597862 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:42.597870 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:42.600230 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:43.096959 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:43.096985 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:43.096994 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:43.096999 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:43.099603 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:43.099685 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:43.597369 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:43.597397 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:43.597407 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:43.597412 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:43.599491 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:44.097845 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:44.097872 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:44.097882 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:44.097886 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:44.100129 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:44.597908 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:44.597935 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:44.597944 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:44.597949 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:44.600197 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:45.097116 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:45.097145 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:45.097154 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:45.097158 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:45.099461 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:45.597363 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:45.597392 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:45.597403 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:45.597408 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:45.599811 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:45.599899 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
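	[editor's note] Each GET / Request Headers / Response Status triple above is client-go's verbose round-tripper trace (round_trippers.go), emitted here because the test runs minikube with -v=7. The general technique — wrapping an http.RoundTripper so every request logs its method, URL, headers, and latency — can be sketched as below. This is an illustrative wrapper written for this report, not client-go's actual logger; the example URL is hypothetical.

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// loggingTransport wraps another RoundTripper and prints request and
	// response details in a shape similar to client-go's round_trippers.go.
	type loggingTransport struct {
		next http.RoundTripper
	}

	func (t loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
		fmt.Printf("%s %s\n", req.Method, req.URL)
		fmt.Println("Request Headers:")
		for k, vs := range req.Header { // map iteration order is nondeterministic
			for _, v := range vs {
				fmt.Printf("    %s: %s\n", k, v)
			}
		}
		start := time.Now()
		resp, err := t.next.RoundTrip(req)
		if err != nil {
			return nil, err
		}
		fmt.Printf("Response Status: %s in %d milliseconds\n",
			resp.Status, time.Since(start).Milliseconds())
		return resp, nil
	}

	func main() {
		client := &http.Client{Transport: loggingTransport{next: http.DefaultTransport}}
		// Hypothetical endpoint, only to exercise the wrapper.
		resp, err := client.Get("https://example.com/api/v1/nodes/ha-290859-m02")
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		resp.Body.Close()
	}

	Because Go iterates header maps in nondeterministic order, the Accept and User-Agent lines swap places between poll cycles in the trace above; that is expected logging behavior, not a fault in the test.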
	I0414 14:54:46.097776 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:46.097801 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:46.097809 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:46.097814 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:46.100355 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:46.597079 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:46.597104 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:46.597112 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:46.597118 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:46.599632 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:47.097368 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:47.097414 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:47.097423 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:47.097427 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:47.099773 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:47.597600 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:47.597624 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:47.597632 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:47.597637 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:47.600105 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:47.600192 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:48.096873 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:48.096905 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:48.096921 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:48.096927 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:48.099178 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:48.596912 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:48.596938 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:48.596945 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:48.596952 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:48.599004 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:49.097608 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:49.097631 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:49.097641 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:49.097645 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:49.099908 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:49.597696 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:49.597722 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:49.597730 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:49.597735 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:49.600131 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:49.600216 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:50.097068 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:50.097094 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:50.097103 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:50.097108 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:50.099234 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:50.596970 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:50.596997 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:50.597008 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:50.597012 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:50.599499 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:51.097376 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:51.097404 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:51.097433 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:51.097437 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:51.099811 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:51.597585 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:51.597611 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:51.597620 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:51.597624 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:51.600264 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:51.600359 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:52.097120 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:52.097146 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:52.097155 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:52.097159 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:52.100007 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:52.596856 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:52.596893 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:52.596902 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:52.596908 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:52.599385 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:53.097209 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:53.097237 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:53.097245 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:53.097249 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:53.099552 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:53.597353 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:53.597378 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:53.597387 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:53.597396 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:53.599946 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:54.097385 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:54.097410 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:54.097419 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:54.097425 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:54.099753 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:54.099849 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:54.597114 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:54.597140 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:54.597152 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:54.597159 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:54.599304 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:55.097077 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:55.097101 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:55.097109 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:55.097116 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:55.099594 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:55.597394 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:55.597430 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:55.597443 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:55.597448 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:55.599922 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:56.097857 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:56.097882 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:56.097891 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:56.097896 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:56.099961 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:56.100052 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:56.597806 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:56.597832 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:56.597841 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:56.597846 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:56.600303 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:57.097159 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:57.097187 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:57.097195 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:57.097200 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:57.099508 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:57.597505 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:57.597532 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:57.597541 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:57.597545 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:57.600204 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:58.097048 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:58.097074 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:58.097082 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:58.097086 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:58.099381 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:58.597205 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:58.597230 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:58.597239 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:58.597245 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:58.599451 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:58.599546 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:59.097886 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:59.097918 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:59.097931 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:59.097939 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:59.100163 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:59.596982 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:59.597010 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:59.597021 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:59.597026 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:59.599059 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:00.097066 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:00.097091 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:00.097103 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:00.097109 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:00.099359 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:00.597072 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:00.597098 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:00.597107 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:00.597113 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:00.599230 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:01.096958 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:01.096983 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:01.096991 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:01.096997 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:01.099098 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:01.099184 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:01.596893 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:01.596921 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:01.596933 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:01.596939 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:01.599452 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:02.097155 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:02.097182 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:02.097191 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:02.097197 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:02.099208 1221070 round_trippers.go:581] Response Status: 404 Not Found in 1 milliseconds
	I0414 14:55:02.596931 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:02.596957 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:02.596968 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:02.596973 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:02.598907 1221070 round_trippers.go:581] Response Status: 404 Not Found in 1 milliseconds
	I0414 14:55:03.097709 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:03.097736 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:03.097744 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:03.097749 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:03.100088 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:03.100185 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:03.597905 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:03.597933 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:03.597944 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:03.597949 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:03.600246 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:04.097651 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:04.097679 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:04.097687 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:04.097693 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:04.100045 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:04.597839 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:04.597876 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:04.597885 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:04.597890 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:04.600163 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:05.097176 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:05.097200 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:05.097210 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:05.097214 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:05.099624 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:05.597323 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:05.597350 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:05.597360 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:05.597365 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:05.599598 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:05.599695 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:06.097552 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:06.097582 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:06.097591 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:06.097595 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:06.099900 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:06.597946 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:06.597974 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:06.597982 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:06.597988 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:06.600426 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:07.097279 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:07.097306 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:07.097315 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:07.097320 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:07.099371 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:07.597212 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:07.597236 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:07.597245 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:07.597250 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:07.599340 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:08.097240 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:08.097274 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:08.097289 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:08.097296 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:08.099717 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:08.099814 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:08.597662 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:08.597688 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:08.597697 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:08.597702 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:08.599709 1221070 round_trippers.go:581] Response Status: 404 Not Found in 1 milliseconds
	I0414 14:55:09.097250 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:09.097278 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:09.097289 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:09.097294 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:09.099634 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:09.597565 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:09.597589 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:09.597598 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:09.597603 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:09.599920 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:10.097101 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:10.097125 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:10.097136 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:10.097141 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:10.099632 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:10.597582 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:10.597608 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:10.597617 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:10.597623 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:10.599909 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:10.600015 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:11.097848 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:11.097875 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:11.097884 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:11.097889 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:11.100388 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:11.597033 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:11.597059 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:11.597068 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:11.597073 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:11.599446 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:12.097209 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:12.097237 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:12.097246 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:12.097251 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:12.099596 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:12.597381 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:12.597409 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:12.597419 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:12.597425 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:12.599739 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:13.097653 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:13.097679 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:13.097694 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:13.097698 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:13.100085 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:13.100162 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:13.596932 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:13.596960 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:13.596970 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:13.596976 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:13.599364 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:14.097757 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:14.097784 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:14.097793 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:14.097799 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:14.100496 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:14.597210 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:14.597235 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:14.597244 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:14.597248 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:14.599610 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:15.097782 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:15.097807 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:15.097819 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:15.097824 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:15.101005 1221070 round_trippers.go:581] Response Status: 404 Not Found in 3 milliseconds
	I0414 14:55:15.101098 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:15.597806 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:15.597832 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:15.597841 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:15.597844 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:15.600361 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:16.097098 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:16.097124 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:16.097133 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:16.097138 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:16.099616 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:16.597475 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:16.597501 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:16.597509 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:16.597514 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:16.599989 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:17.097804 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:17.097832 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:17.097842 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:17.097849 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:17.100125 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:17.597891 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:17.597921 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:17.597930 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:17.597934 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:17.600307 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:17.600400 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:18.097041 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:18.097068 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:18.097076 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:18.097082 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:18.099561 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:18.597301 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:18.597328 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:18.597337 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:18.597341 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:18.599635 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:19.097188 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:19.097214 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:19.097223 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:19.097228 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:19.099493 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:19.597192 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:19.597215 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:19.597224 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:19.597229 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:19.599599 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:20.097639 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:20.097663 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:20.097671 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:20.097675 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:20.099803 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:20.099912 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:20.597725 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:20.597750 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:20.597759 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:20.597764 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:20.600274 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:21.097135 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:21.097164 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:21.097173 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:21.097178 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:21.099615 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:21.597251 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:21.597300 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:21.597309 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:21.597313 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:21.599653 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:22.097498 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:22.097523 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:22.097536 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:22.097542 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:22.099623 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:22.597528 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:22.597557 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:22.597565 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:22.597570 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:22.599837 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:22.599933 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:23.097809 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:23.097835 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:23.097846 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:23.097851 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:23.099889 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:23.597818 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:23.597845 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:23.597858 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:23.597865 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:23.599919 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:24.097248 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:24.097280 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:24.097293 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:24.097299 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:24.099650 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:24.597564 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:24.597589 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:24.597598 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:24.597603 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:24.600076 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:24.600182 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:25.097211 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:25.097237 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:25.097246 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:25.097250 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:25.099737 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:25.597673 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:25.597700 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:25.597711 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:25.597718 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:25.600363 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:26.097116 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:26.097145 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:26.097154 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:26.097158 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:26.099408 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:26.597105 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:26.597133 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:26.597142 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:26.597147 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:26.599718 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:27.097532 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:27.097559 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:27.097569 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:27.097573 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:27.100132 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:27.100234 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:27.596843 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:27.596866 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:27.596875 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:27.596880 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:27.598858 1221070 round_trippers.go:581] Response Status: 404 Not Found in 1 milliseconds
	I0414 14:55:28.097716 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:28.097744 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:28.097752 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:28.097759 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:28.100226 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:28.596972 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:28.596999 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:28.597008 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:28.597013 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:28.599202 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:29.097781 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:29.097804 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:29.097814 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:29.097819 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:29.100259 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:29.100355 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:29.596974 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:29.597007 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:29.597018 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:29.597023 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:29.599234 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:30.097347 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:30.097369 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:30.097379 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:30.097384 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:30.099858 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:30.597703 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:30.597732 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:30.597742 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:30.597747 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:30.600213 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:31.096866 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:31.096894 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:31.096910 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:31.096925 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:31.098999 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:31.596844 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:31.596869 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:31.596877 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:31.596881 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:31.599416 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:31.599520 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:32.097294 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:32.097320 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:32.097329 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:32.097334 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:32.099664 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:32.597534 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:32.597562 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:32.597573 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:32.597581 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:32.599997 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:33.097885 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:33.097913 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:33.097925 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:33.097933 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:33.100424 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:33.597212 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:33.597245 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:33.597256 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:33.597261 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:33.599737 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:33.599825 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:34.096946 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:34.096977 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:34.096990 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:34.096997 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:34.099325 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:34.597051 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:34.597077 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:34.597088 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:34.597094 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:34.599638 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:35.097797 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:35.097822 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:35.097832 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:35.097839 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:35.100270 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:35.597109 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:35.597137 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:35.597145 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:35.597150 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:35.599542 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:36.097465 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:36.097491 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:36.097500 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:36.097505 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:36.100187 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:36.100290 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:36.596906 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:36.596932 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:36.596944 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:36.596950 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:36.599839 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:37.097766 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:37.097792 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:37.097801 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:37.097807 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:37.099951 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:37.597950 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:37.597979 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:37.597989 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:37.597993 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:37.600410 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:38.097271 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:38.097298 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:38.097306 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:38.097311 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:38.099663 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:38.597601 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:38.597627 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:38.597636 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:38.597647 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:38.600447 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:38.600553 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:39.097748 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:39.097775 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:39.097786 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:39.097794 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:39.100150 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:39.596990 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:39.597019 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:39.597028 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:39.597032 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:39.599406 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:40.097366 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:40.097396 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:40.097409 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:40.097416 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:40.099965 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:40.597743 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:40.597771 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:40.597780 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:40.597785 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:40.600273 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:41.096973 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:41.096997 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:41.097006 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:41.097013 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:41.099218 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:41.099337 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:41.596871 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:41.596897 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:41.596908 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:41.596913 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:41.599017 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:42.097855 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:42.097889 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:42.097899 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:42.097905 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:42.101284 1221070 round_trippers.go:581] Response Status: 404 Not Found in 3 milliseconds
	I0414 14:55:42.596957 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:42.596996 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:42.597008 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:42.597016 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:42.599231 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:43.097007 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:43.097034 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:43.097046 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:43.097051 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:43.099362 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:43.099452 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:43.597120 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:43.597147 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:43.597157 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:43.597164 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:43.599396 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:44.097698 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:44.097725 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:44.097734 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:44.097738 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:44.099914 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:44.597690 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:44.597715 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:44.597724 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:44.597729 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:44.600159 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:45.097089 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:45.097112 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:45.097121 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:45.097125 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:45.099361 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:45.596975 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:45.597002 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:45.597010 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:45.597014 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:45.599569 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:45.599649 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:46.097457 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:46.097483 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:46.097492 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:46.097497 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:46.099821 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:46.597701 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:46.597727 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:46.597735 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:46.597739 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:46.600275 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:47.097117 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:47.097141 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:47.097150 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:47.097154 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:47.099568 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:47.597488 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:47.597514 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:47.597522 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:47.597527 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:47.599944 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:47.600100 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:48.096867 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:48.096892 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:48.096908 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:48.096911 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:48.099730 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:48.597476 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:48.597506 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:48.597514 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:48.597520 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:48.599790 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:49.097193 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:49.097219 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:49.097228 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:49.097231 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:49.099213 1221070 round_trippers.go:581] Response Status: 404 Not Found in 1 milliseconds
	I0414 14:55:49.596898 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:49.596923 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:49.596931 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:49.596935 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:49.599211 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:50.097588 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:50.097612 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:50.097622 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:50.097626 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:50.099587 1221070 round_trippers.go:581] Response Status: 404 Not Found in 1 milliseconds
	I0414 14:55:50.099671 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:50.597293 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:50.597326 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:50.597335 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:50.597346 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:50.599755 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:51.097570 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:51.097599 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:51.097608 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:51.097613 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:51.100622 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:51.597436 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:51.597463 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:51.597472 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:51.597477 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:51.599799 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:52.097594 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:52.097621 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:52.097631 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:52.097635 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:52.100149 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:52.100239 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:52.596871 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:52.596917 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:52.596927 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:52.596932 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:52.598861 1221070 round_trippers.go:581] Response Status: 404 Not Found in 1 milliseconds
	I0414 14:55:53.097658 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:53.097687 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:53.097695 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:53.097701 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:53.100104 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:53.597899 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:53.597931 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:53.597939 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:53.597944 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:53.600381 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:54.097688 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:54.097715 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:54.097724 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:54.097728 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:54.100282 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:54.100365 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:54.597098 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:54.597127 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:54.597135 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:54.597139 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:54.599447 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:55.097620 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:55.097648 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:55.097658 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:55.097663 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:55.100052 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:55.596920 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:55.596949 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:55.596957 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:55.596964 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:55.599399 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:56.097258 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:56.097285 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:56.097294 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:56.097300 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:56.099626 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:56.597512 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:56.597537 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:56.597546 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:56.597550 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:56.599780 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:56.599862 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:57.097715 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:57.097744 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:57.097753 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:57.097758 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:57.100249 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:57.597037 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:57.597065 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:57.597073 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:57.597079 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:57.599410 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:58.097243 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:58.097271 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:58.097281 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:58.097286 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:58.099805 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:58.597743 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:58.597775 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:58.597785 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:58.597791 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:58.599981 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:58.600099 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:59.097525 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:59.097554 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:59.097563 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:59.097567 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:59.100128 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:59.596950 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:59.596975 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:59.596983 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:59.596987 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:59.599509 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:00.097582 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:00.097606 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:00.097615 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:00.097620 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:00.099878 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:00.597634 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:00.597660 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:00.597669 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:00.597673 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:00.599960 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:01.097755 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:01.097779 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:01.097788 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:01.097793 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:01.100104 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:01.100191 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:56:01.597749 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:01.597778 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:01.597789 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:01.597799 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:01.600379 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:02.097127 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:02.097163 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:02.097172 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:02.097179 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:02.099347 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:02.597084 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:02.597114 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:02.597122 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:02.597126 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:02.599484 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:03.097203 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:03.097229 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:03.097244 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:03.097249 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:03.099750 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:03.597532 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:03.597557 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:03.597565 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:03.597570 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:03.599887 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:03.599994 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:56:04.097156 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:04.097182 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:04.097193 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:04.097202 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:04.099543 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:04.597391 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:04.597422 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:04.597434 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:04.597441 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:04.599613 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:05.097696 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:05.097719 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:05.097727 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:05.097733 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:05.101649 1221070 round_trippers.go:581] Response Status: 404 Not Found in 3 milliseconds
	I0414 14:56:05.597340 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:05.597364 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:05.597373 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:05.597379 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:05.599888 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:05.600026 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:56:06.097634 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:06.097659 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:06.097668 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:06.097672 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:06.099863 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:06.597652 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:06.597686 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:06.597701 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:06.597707 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:06.599965 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:07.097782 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:07.097812 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:07.097825 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:07.097833 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:07.100367 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:07.597100 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:07.597132 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:07.597144 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:07.597151 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:07.599359 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:08.097183 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:08.097225 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:08.097240 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:08.097248 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:08.099618 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:08.099711 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:56:08.597331 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:08.597358 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:08.597370 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:08.597377 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:08.599820 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:09.097223 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:09.097254 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:09.097264 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:09.097268 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:09.099655 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:09.597538 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:09.597562 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:09.597570 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:09.597576 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:09.599815 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:10.097831 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:10.097853 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:10.097861 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:10.097865 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:10.100242 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:10.100337 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:56:10.597109 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:10.597137 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:10.597146 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:10.597152 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:10.600167 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:11.097037 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:11.097061 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:11.097070 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:11.097076 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:11.099474 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:11.597114 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:11.597141 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:11.597150 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:11.597155 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:11.599707 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:12.097023 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:12.097048 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:12.097056 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:12.097061 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:12.099277 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:12.099371 1221070 node_ready.go:38] duration metric: took 4m0.002706246s for node "ha-290859-m02" to be "Ready" ...
	I0414 14:56:12.101227 1221070 out.go:201] 
	W0414 14:56:12.102352 1221070 out.go:270] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0414 14:56:12.102371 1221070 out.go:270] * 
	W0414 14:56:12.103364 1221070 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 14:56:12.104737 1221070 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-linux-amd64 node list -p ha-290859 -v=7 --alsologtostderr" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-290859
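The stderr capture above is minikube's node-ready wait: node_ready.go polls GET /api/v1/nodes/ha-290859-m02 roughly every 500ms, treats each 404 Not Found as "node not registered yet", and aborts once its budget lapses ("took 4m0.002706246s", surfaced as waitNodeCondition: context deadline exceeded). Below is a minimal sketch of the same poll-until-ready pattern using client-go; the kubeconfig path and the 500ms/4m numbers are taken from this log, but the code is an illustration of the pattern, not minikube's actual implementation:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path as reported in this run's environment.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20512-1196368/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll every 500ms with a 4m budget, matching the request cadence and
		// the duration metric visible in the log above.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, "ha-290859-m02", metav1.GetOptions{})
				if apierrors.IsNotFound(err) {
					return false, nil // 404: node object not created yet, keep polling
				}
				if err != nil {
					return false, err
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return true, nil
					}
				}
				return false, nil
			})
		// On timeout, PollUntilContextTimeout returns the context error, which is
		// what ha_test.go reports above as GUEST_START / context deadline exceeded.
		fmt.Println("wait result:", err)
	}

Because the ha-290859-m02 VM never registered its node object with the API server, every poll returned 404 and the wait could only end by hitting the deadline.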
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-290859 -n ha-290859
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-290859 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-290859 logs -n 25: (1.680454465s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:40 UTC | 14 Apr 25 14:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x -- nslookup |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx -- nslookup |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg -- nslookup |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x             |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx             |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg             |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg -- sh       |           |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1            |           |         |         |                     |                     |
	| node    | add -p ha-290859 -v=7                | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:42 UTC | 14 Apr 25 14:42 UTC |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-290859 node stop m02 -v=7         | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:42 UTC | 14 Apr 25 14:42 UTC |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-290859 node start m02 -v=7        | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:43 UTC |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-290859 -v=7               | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:48 UTC |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| stop    | -p ha-290859 -v=7                    | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:48 UTC | 14 Apr 25 14:51 UTC |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| start   | -p ha-290859 --wait=true -v=7        | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:51 UTC |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-290859                    | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:56 UTC |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 14:51:24
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 14:51:24.924385 1221070 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:51:24.924621 1221070 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:51:24.924629 1221070 out.go:358] Setting ErrFile to fd 2...
	I0414 14:51:24.924633 1221070 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:51:24.924808 1221070 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
	I0414 14:51:24.925345 1221070 out.go:352] Setting JSON to false
	I0414 14:51:24.926340 1221070 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":23628,"bootTime":1744618657,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 14:51:24.926457 1221070 start.go:139] virtualization: kvm guest
	I0414 14:51:24.928287 1221070 out.go:177] * [ha-290859] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 14:51:24.929459 1221070 out.go:177]   - MINIKUBE_LOCATION=20512
	I0414 14:51:24.929469 1221070 notify.go:220] Checking for updates...
	I0414 14:51:24.931737 1221070 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 14:51:24.933068 1221070 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:51:24.934102 1221070 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:51:24.935103 1221070 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 14:51:24.936089 1221070 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 14:51:24.937496 1221070 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:51:24.937602 1221070 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 14:51:24.938128 1221070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:51:24.938198 1221070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:51:24.954244 1221070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45077
	I0414 14:51:24.954880 1221070 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:51:24.955464 1221070 main.go:141] libmachine: Using API Version  1
	I0414 14:51:24.955489 1221070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:51:24.955900 1221070 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:51:24.956117 1221070 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:51:24.990242 1221070 out.go:177] * Using the kvm2 driver based on existing profile
	I0414 14:51:24.991319 1221070 start.go:297] selected driver: kvm2
	I0414 14:51:24.991332 1221070 start.go:901] validating driver "kvm2" against &{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.112 Port:0 KubernetesVersion:v1.32.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:51:24.991491 1221070 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 14:51:24.991827 1221070 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:51:24.991902 1221070 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20512-1196368/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 14:51:25.007424 1221070 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 14:51:25.008082 1221070 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 14:51:25.008124 1221070 cni.go:84] Creating CNI manager for ""
	I0414 14:51:25.008189 1221070 cni.go:136] multinode detected (3 nodes found), recommending kindnet
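
cni.go picks the CNI for the profile at this point: no CNI is set explicitly in the loaded config, and with three nodes found it recommends kindnet. A minimal sketch of that heuristic, with hypothetical names (this is not minikube's actual cni.go API):

    package sketch

    // chooseCNI mirrors the decision logged above: an explicit choice wins,
    // otherwise any multinode cluster gets kindnet.
    func chooseCNI(requested string, nodeCount int) string {
        if requested != "" {
            return requested
        }
        if nodeCount > 1 {
            return "kindnet" // "multinode detected (3 nodes found), recommending kindnet"
        }
        return "bridge" // assumed single-node default, for illustration only
    }
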
	I0414 14:51:25.008244 1221070 start.go:340] cluster config:
	{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.112 Port:0 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:51:25.008400 1221070 iso.go:125] acquiring lock: {Name:mkbf783c803effe6c4b8297ac6b84dcca9e29413 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:51:25.010019 1221070 out.go:177] * Starting "ha-290859" primary control-plane node in "ha-290859" cluster
	I0414 14:51:25.011347 1221070 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:51:25.011399 1221070 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
	I0414 14:51:25.011409 1221070 cache.go:56] Caching tarball of preloaded images
	I0414 14:51:25.011488 1221070 preload.go:172] Found /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0414 14:51:25.011498 1221070 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0414 14:51:25.011617 1221070 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:51:25.011799 1221070 start.go:360] acquireMachinesLock for ha-290859: {Name:mk496006d22a0565bb9e0d565e1b3cb0cf0971cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 14:51:25.011840 1221070 start.go:364] duration metric: took 23.649µs to acquireMachinesLock for "ha-290859"
	I0414 14:51:25.011855 1221070 start.go:96] Skipping create...Using existing machine configuration
	I0414 14:51:25.011862 1221070 fix.go:54] fixHost starting: 
	I0414 14:51:25.012121 1221070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:51:25.012156 1221070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:51:25.026599 1221070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40091
	I0414 14:51:25.027122 1221070 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:51:25.027660 1221070 main.go:141] libmachine: Using API Version  1
	I0414 14:51:25.027688 1221070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:51:25.028011 1221070 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:51:25.028229 1221070 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:51:25.028380 1221070 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:51:25.030231 1221070 fix.go:112] recreateIfNeeded on ha-290859: state=Stopped err=<nil>
	I0414 14:51:25.030265 1221070 main.go:141] libmachine: (ha-290859) Calling .DriverName
	W0414 14:51:25.030457 1221070 fix.go:138] unexpected machine state, will restart: <nil>
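
fix.go has found the existing libvirt domain in state=Stopped, so rather than deleting and recreating the machine it falls through to a restart. A hedged sketch of that branch, with an assumed two-method Driver interface standing in for libmachine's real plugin API:

    package sketch

    // Driver is a stand-in for the libmachine driver plugin; hypothetical.
    type Driver interface {
        GetState() (string, error)
        Start() error
    }

    // fixHost restarts a stopped machine instead of recreating it, which is
    // the "unexpected machine state, will restart" path logged above.
    func fixHost(d Driver) error {
        state, err := d.GetState()
        if err != nil {
            return err
        }
        if state == "Stopped" {
            return d.Start()
        }
        return nil // already running: reuse the machine as-is
    }
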
	I0414 14:51:25.032663 1221070 out.go:177] * Restarting existing kvm2 VM for "ha-290859" ...
	I0414 14:51:25.033815 1221070 main.go:141] libmachine: (ha-290859) Calling .Start
	I0414 14:51:25.034026 1221070 main.go:141] libmachine: (ha-290859) starting domain...
	I0414 14:51:25.034048 1221070 main.go:141] libmachine: (ha-290859) ensuring networks are active...
	I0414 14:51:25.034729 1221070 main.go:141] libmachine: (ha-290859) Ensuring network default is active
	I0414 14:51:25.035067 1221070 main.go:141] libmachine: (ha-290859) Ensuring network mk-ha-290859 is active
	I0414 14:51:25.035424 1221070 main.go:141] libmachine: (ha-290859) getting domain XML...
	I0414 14:51:25.036088 1221070 main.go:141] libmachine: (ha-290859) creating domain...
	I0414 14:51:26.234459 1221070 main.go:141] libmachine: (ha-290859) waiting for IP...
	I0414 14:51:26.235587 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:26.236072 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:26.236210 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:26.236086 1221099 retry.go:31] will retry after 280.740636ms: waiting for domain to come up
	I0414 14:51:26.518687 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:26.519197 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:26.519215 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:26.519169 1221099 retry.go:31] will retry after 243.427688ms: waiting for domain to come up
	I0414 14:51:26.765118 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:26.765534 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:26.765582 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:26.765501 1221099 retry.go:31] will retry after 427.840973ms: waiting for domain to come up
	I0414 14:51:27.195132 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:27.195585 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:27.195651 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:27.195569 1221099 retry.go:31] will retry after 469.259994ms: waiting for domain to come up
	I0414 14:51:27.666308 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:27.666685 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:27.666712 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:27.666664 1221099 retry.go:31] will retry after 657.912219ms: waiting for domain to come up
	I0414 14:51:28.326528 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:28.326927 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:28.326955 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:28.326878 1221099 retry.go:31] will retry after 750.684746ms: waiting for domain to come up
	I0414 14:51:29.078742 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:29.079136 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:29.079161 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:29.079097 1221099 retry.go:31] will retry after 1.04198738s: waiting for domain to come up
	I0414 14:51:30.122400 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:30.122774 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:30.122798 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:30.122735 1221099 retry.go:31] will retry after 1.397183101s: waiting for domain to come up
	I0414 14:51:31.522268 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:31.522683 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:31.522709 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:31.522652 1221099 retry.go:31] will retry after 1.778850774s: waiting for domain to come up
	I0414 14:51:33.303491 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:33.303831 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:33.303859 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:33.303809 1221099 retry.go:31] will retry after 2.116605484s: waiting for domain to come up
	I0414 14:51:35.422345 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:35.422804 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:35.422863 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:35.422810 1221099 retry.go:31] will retry after 2.695384495s: waiting for domain to come up
	I0414 14:51:38.120436 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:38.120841 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:38.120862 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:38.120804 1221099 retry.go:31] will retry after 2.291586599s: waiting for domain to come up
	I0414 14:51:40.414425 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:40.414781 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:40.414804 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:40.414750 1221099 retry.go:31] will retry after 4.202133346s: waiting for domain to come up
	I0414 14:51:44.622185 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.622671 1221070 main.go:141] libmachine: (ha-290859) found domain IP: 192.168.39.110
	I0414 14:51:44.622701 1221070 main.go:141] libmachine: (ha-290859) reserving static IP address...
	I0414 14:51:44.622714 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has current primary IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.623272 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "ha-290859", mac: "52:54:00:be:9f:8b", ip: "192.168.39.110"} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:44.623307 1221070 main.go:141] libmachine: (ha-290859) DBG | skip adding static IP to network mk-ha-290859 - found existing host DHCP lease matching {name: "ha-290859", mac: "52:54:00:be:9f:8b", ip: "192.168.39.110"}
	I0414 14:51:44.623333 1221070 main.go:141] libmachine: (ha-290859) reserved static IP address 192.168.39.110 for domain ha-290859
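
The retry.go lines above show libmachine polling libvirt's DHCP leases for the domain's address, sleeping a little longer after each miss (about 280ms at first, growing to roughly 4s before the lease for 192.168.39.110 finally appears). A compact sketch of that poll-with-backoff shape; the lookup callback and the delay schedule are illustrative, not minikube's randomized one:

    package sketch

    import (
        "errors"
        "time"
    )

    // waitForIP polls lookup until the domain reports an address or the
    // timeout elapses, growing the sleep between attempts.
    func waitForIP(lookup func() (string, bool), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, ok := lookup(); ok {
                return ip, nil
            }
            time.Sleep(delay)
            if delay < 4*time.Second {
                delay += delay / 2 // back off gradually, as the logged retries do
            }
        }
        return "", errors.New("timed out waiting for domain IP")
    }
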
	I0414 14:51:44.623346 1221070 main.go:141] libmachine: (ha-290859) waiting for SSH...
	I0414 14:51:44.623353 1221070 main.go:141] libmachine: (ha-290859) DBG | Getting to WaitForSSH function...
	I0414 14:51:44.625584 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.625894 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:44.625919 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.626118 1221070 main.go:141] libmachine: (ha-290859) DBG | Using SSH client type: external
	I0414 14:51:44.626160 1221070 main.go:141] libmachine: (ha-290859) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa (-rw-------)
	I0414 14:51:44.626206 1221070 main.go:141] libmachine: (ha-290859) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.110 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:51:44.626228 1221070 main.go:141] libmachine: (ha-290859) DBG | About to run SSH command:
	I0414 14:51:44.626236 1221070 main.go:141] libmachine: (ha-290859) DBG | exit 0
	I0414 14:51:44.746948 1221070 main.go:141] libmachine: (ha-290859) DBG | SSH cmd err, output: <nil>: 
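
WaitForSSH shells out to the system ssh client with the machine's generated key and runs `exit 0`; a nil error, as logged here, means the guest's sshd is up and accepts the key. Roughly, with options abridged from the command line shown above (probeSSH is a hypothetical wrapper, and callers would retry it until it succeeds):

    package sketch

    import "os/exec"

    // probeSSH runs `ssh ... docker@ip exit 0` once; a nil error is the
    // "SSH cmd err, output: <nil>" outcome in the log.
    func probeSSH(ip, keyPath string) error {
        cmd := exec.Command("/usr/bin/ssh",
            "-F", "/dev/null",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@"+ip,
            "exit 0")
        return cmd.Run()
    }
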
	I0414 14:51:44.747341 1221070 main.go:141] libmachine: (ha-290859) Calling .GetConfigRaw
	I0414 14:51:44.748066 1221070 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:51:44.750502 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.750990 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:44.751020 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.751318 1221070 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:51:44.751530 1221070 machine.go:93] provisionDockerMachine start ...
	I0414 14:51:44.751557 1221070 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:51:44.751774 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:51:44.754154 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.754523 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:44.754549 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.754732 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:51:44.754917 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:44.755086 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:44.755209 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:51:44.755372 1221070 main.go:141] libmachine: Using SSH client type: native
	I0414 14:51:44.755592 1221070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:51:44.755609 1221070 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 14:51:44.859385 1221070 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0414 14:51:44.859420 1221070 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:51:44.859703 1221070 buildroot.go:166] provisioning hostname "ha-290859"
	I0414 14:51:44.859733 1221070 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:51:44.859976 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:51:44.862591 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.862947 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:44.862982 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.863100 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:51:44.863336 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:44.863508 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:44.863682 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:51:44.863853 1221070 main.go:141] libmachine: Using SSH client type: native
	I0414 14:51:44.864206 1221070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:51:44.864235 1221070 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-290859 && echo "ha-290859" | sudo tee /etc/hostname
	I0414 14:51:44.980307 1221070 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-290859
	
	I0414 14:51:44.980345 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:51:44.983477 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.983889 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:44.983935 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.984061 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:51:44.984280 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:44.984453 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:44.984640 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:51:44.984799 1221070 main.go:141] libmachine: Using SSH client type: native
	I0414 14:51:44.985038 1221070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:51:44.985053 1221070 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-290859' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-290859/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-290859' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:51:45.095107 1221070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:51:45.095137 1221070 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1196368/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1196368/.minikube}
	I0414 14:51:45.095159 1221070 buildroot.go:174] setting up certificates
	I0414 14:51:45.095170 1221070 provision.go:84] configureAuth start
	I0414 14:51:45.095189 1221070 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:51:45.095535 1221070 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:51:45.098271 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.098658 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:45.098683 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.098857 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:51:45.101319 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.101590 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:45.101614 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.101756 1221070 provision.go:143] copyHostCerts
	I0414 14:51:45.101791 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:51:45.101823 1221070 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem, removing ...
	I0414 14:51:45.101841 1221070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:51:45.101907 1221070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem (1123 bytes)
	I0414 14:51:45.101983 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:51:45.102001 1221070 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem, removing ...
	I0414 14:51:45.102007 1221070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:51:45.102032 1221070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem (1675 bytes)
	I0414 14:51:45.102075 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:51:45.102097 1221070 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem, removing ...
	I0414 14:51:45.102103 1221070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:51:45.102122 1221070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem (1082 bytes)
	I0414 14:51:45.102165 1221070 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem org=jenkins.ha-290859 san=[127.0.0.1 192.168.39.110 ha-290859 localhost minikube]
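
configureAuth regenerates the docker-machine server certificate, signed by the local CA, with exactly the SANs listed above (127.0.0.1, the VM IP, the hostname, localhost, minikube). A hedged crypto/x509 sketch of that step; the CA certificate and both keys are assumed to be loaded already, and the exact template fields minikube sets may differ:

    package sketch

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // newServerCert issues a DER-encoded server cert for the SANs in the log.
    func newServerCert(ca *x509.Certificate, caKey, key *rsa.PrivateKey) ([]byte, error) {
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-290859"}},
            DNSNames:     []string{"ha-290859", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.110")},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        return x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    }
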
	I0414 14:51:45.257877 1221070 provision.go:177] copyRemoteCerts
	I0414 14:51:45.257960 1221070 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:51:45.257996 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:51:45.261081 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.261410 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:45.261440 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.261666 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:51:45.261911 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:45.262125 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:51:45.262285 1221070 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:51:45.340876 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0414 14:51:45.340975 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0414 14:51:45.362634 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0414 14:51:45.362694 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 14:51:45.383617 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0414 14:51:45.383700 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 14:51:45.404718 1221070 provision.go:87] duration metric: took 309.531359ms to configureAuth
	I0414 14:51:45.404750 1221070 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:51:45.405030 1221070 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:51:45.405049 1221070 machine.go:96] duration metric: took 653.506288ms to provisionDockerMachine
	I0414 14:51:45.405057 1221070 start.go:293] postStartSetup for "ha-290859" (driver="kvm2")
	I0414 14:51:45.405066 1221070 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:51:45.405099 1221070 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:51:45.405452 1221070 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:51:45.405481 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:51:45.408299 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.408642 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:45.408670 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.408811 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:51:45.408995 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:45.409115 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:51:45.409248 1221070 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:51:45.489101 1221070 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:51:45.493122 1221070 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:51:45.493155 1221070 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/addons for local assets ...
	I0414 14:51:45.493230 1221070 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/files for local assets ...
	I0414 14:51:45.493340 1221070 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> 12036392.pem in /etc/ssl/certs
	I0414 14:51:45.493354 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /etc/ssl/certs/12036392.pem
	I0414 14:51:45.493471 1221070 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:51:45.502327 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:51:45.523422 1221070 start.go:296] duration metric: took 118.348669ms for postStartSetup
	I0414 14:51:45.523473 1221070 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:51:45.523812 1221070 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0414 14:51:45.523846 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:51:45.526608 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.526952 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:45.526984 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.527122 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:51:45.527317 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:45.527485 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:51:45.527636 1221070 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:51:45.609005 1221070 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0414 14:51:45.609116 1221070 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0414 14:51:45.667143 1221070 fix.go:56] duration metric: took 20.655266779s for fixHost
	I0414 14:51:45.667202 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:51:45.670139 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.670591 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:45.670620 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.670836 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:51:45.671137 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:45.671338 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:45.671522 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:51:45.671692 1221070 main.go:141] libmachine: Using SSH client type: native
	I0414 14:51:45.671935 1221070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:51:45.671948 1221070 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:51:45.775787 1221070 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744642305.752586107
	
	I0414 14:51:45.775819 1221070 fix.go:216] guest clock: 1744642305.752586107
	I0414 14:51:45.775848 1221070 fix.go:229] Guest: 2025-04-14 14:51:45.752586107 +0000 UTC Remote: 2025-04-14 14:51:45.667180128 +0000 UTC m=+20.782398303 (delta=85.405979ms)
	I0414 14:51:45.775882 1221070 fix.go:200] guest clock delta is within tolerance: 85.405979ms
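
fix.go compares the guest's `date +%s.%N` output against the host clock and only forces a resync when the drift exceeds a tolerance; here the 85.4ms delta passes. In outline (the one-second threshold below is an assumption, not a value taken from minikube):

    package sketch

    import "time"

    // withinSkew reports whether the guest clock is close enough to the
    // host clock; guest is parsed from `date +%s.%N` run on the VM.
    func withinSkew(guest, host time.Time) bool {
        const tolerance = time.Second // assumed threshold
        d := guest.Sub(host)
        if d < 0 {
            d = -d
        }
        return d <= tolerance // delta=85.405979ms in this run, so true
    }
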
	I0414 14:51:45.775900 1221070 start.go:83] releasing machines lock for "ha-290859", held for 20.764045917s
	I0414 14:51:45.775923 1221070 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:51:45.776216 1221070 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:51:45.778889 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.779306 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:45.779339 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.779531 1221070 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:51:45.780063 1221070 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:51:45.780265 1221070 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:51:45.780372 1221070 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:51:45.780417 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:51:45.780527 1221070 ssh_runner.go:195] Run: cat /version.json
	I0414 14:51:45.780554 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:51:45.783291 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.783315 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.783676 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:45.783718 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.783821 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:51:45.783864 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:45.783889 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.784002 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:51:45.784123 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:45.784177 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:45.784299 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:51:45.784385 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:51:45.784475 1221070 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:51:45.784588 1221070 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:51:45.860084 1221070 ssh_runner.go:195] Run: systemctl --version
	I0414 14:51:45.888174 1221070 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 14:51:45.893495 1221070 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:51:45.893571 1221070 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:51:45.908348 1221070 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
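
The `find ... -exec mv {} {}.mk_disabled` step above renames any bridge or podman CNI config out of the way so that the kindnet config written later is the only one the runtime loads. A rough local Go equivalent of that rename pass (minikube actually performs it over SSH inside the guest):

    package sketch

    import (
        "os"
        "path/filepath"
        "strings"
    )

    // disableConflictingCNI sidelines bridge/podman configs in a CNI dir,
    // e.g. /etc/cni/net.d, by appending the .mk_disabled suffix.
    func disableConflictingCNI(dir string) error {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return err
        }
        for _, e := range entries {
            name := e.Name()
            if strings.HasSuffix(name, ".mk_disabled") {
                continue // already disabled
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                old := filepath.Join(dir, name)
                if err := os.Rename(old, old+".mk_disabled"); err != nil {
                    return err
                }
            }
        }
        return nil
    }
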
	I0414 14:51:45.908375 1221070 start.go:495] detecting cgroup driver to use...
	I0414 14:51:45.908446 1221070 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0414 14:51:45.935942 1221070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0414 14:51:45.948409 1221070 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:51:45.948475 1221070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:51:45.960942 1221070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:51:45.974488 1221070 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:51:46.086503 1221070 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:51:46.230317 1221070 docker.go:233] disabling docker service ...
	I0414 14:51:46.230381 1221070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:51:46.244297 1221070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:51:46.256626 1221070 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:51:46.408783 1221070 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:51:46.531425 1221070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 14:51:46.544279 1221070 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:51:46.561206 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0414 14:51:46.570536 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0414 14:51:46.579933 1221070 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0414 14:51:46.579987 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0414 14:51:46.589083 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:51:46.598516 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0414 14:51:46.608502 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:51:46.618260 1221070 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:51:46.628002 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0414 14:51:46.637979 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0414 14:51:46.647708 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0414 14:51:46.657465 1221070 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:51:46.666456 1221070 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:51:46.666506 1221070 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:51:46.679179 1221070 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 14:51:46.688058 1221070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:51:46.803994 1221070 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0414 14:51:46.830741 1221070 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0414 14:51:46.830851 1221070 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:51:46.834666 1221070 retry.go:31] will retry after 684.331118ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0414 14:51:47.519413 1221070 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:51:47.524753 1221070 start.go:563] Will wait 60s for crictl version
	I0414 14:51:47.524814 1221070 ssh_runner.go:195] Run: which crictl
	I0414 14:51:47.528401 1221070 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:51:47.567610 1221070 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0414 14:51:47.567684 1221070 ssh_runner.go:195] Run: containerd --version
	I0414 14:51:47.592654 1221070 ssh_runner.go:195] Run: containerd --version
	I0414 14:51:47.616410 1221070 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.23 ...
	I0414 14:51:47.617662 1221070 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:51:47.620124 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:47.620497 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:47.620523 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:47.620761 1221070 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 14:51:47.624661 1221070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:51:47.636875 1221070 kubeadm.go:883] updating cluster {Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.112 Port:0 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 14:51:47.637062 1221070 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:51:47.637127 1221070 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:51:47.668962 1221070 containerd.go:627] all images are preloaded for containerd runtime.
	I0414 14:51:47.668993 1221070 containerd.go:534] Images already preloaded, skipping extraction
	I0414 14:51:47.669051 1221070 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:51:47.700719 1221070 containerd.go:627] all images are preloaded for containerd runtime.
	I0414 14:51:47.700748 1221070 cache_images.go:84] Images are preloaded, skipping loading
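
The two `sudo crictl images --output json` runs above back the preload decision: the image list is parsed and every image expected for Kubernetes v1.32.2 must already be present, so both the tarball extraction and image loading are skipped. The core of that check is just set containment; the JSON parsing and the expected-image list are assumed here:

    package sketch

    // allPreloaded reports whether every required image name is present in
    // the set parsed from `crictl images --output json`.
    func allPreloaded(have map[string]bool, want []string) bool {
        for _, img := range want {
            if !have[img] {
                return false
            }
        }
        return true
    }
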
	I0414 14:51:47.700756 1221070 kubeadm.go:934] updating node { 192.168.39.110 8443 v1.32.2 containerd true true} ...
	I0414 14:51:47.700911 1221070 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-290859 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.110
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 14:51:47.701015 1221070 ssh_runner.go:195] Run: sudo crictl info
	I0414 14:51:47.733009 1221070 cni.go:84] Creating CNI manager for ""
	I0414 14:51:47.733034 1221070 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0414 14:51:47.733058 1221070 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 14:51:47.733086 1221070 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.110 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-290859 NodeName:ha-290859 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.110"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.110 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 14:51:47.733246 1221070 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.110
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-290859"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.110"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.110"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
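The generated kubeadm.yaml above is one YAML stream holding InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration documents separated by "---". A minimal Go sketch of splitting such a stream and decoding just the type metadata, using sigs.k8s.io/yaml (the inlined stream is a trimmed stand-in for the full file):

    package main

    import (
    	"fmt"
    	"strings"

    	"sigs.k8s.io/yaml" // YAML-to-JSON shim widely used in the Kubernetes ecosystem
    )

    func main() {
    	stream := `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    ---
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    ---
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    `
    	// Split on the document separator and decode apiVersion/kind of each part.
    	for _, doc := range strings.Split(stream, "\n---\n") {
    		var meta struct {
    			APIVersion string `json:"apiVersion"`
    			Kind       string `json:"kind"`
    		}
    		if err := yaml.Unmarshal([]byte(doc), &meta); err != nil {
    			panic(err)
    		}
    		fmt.Printf("%s (%s)\n", meta.Kind, meta.APIVersion)
    	}
    }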
	
	I0414 14:51:47.733266 1221070 kube-vip.go:115] generating kube-vip config ...
	I0414 14:51:47.733322 1221070 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0414 14:51:47.749704 1221070 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0414 14:51:47.749841 1221070 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.10
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
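kube-vip is configured entirely through environment variables on its static pod: the manifest above enables ARP announcement of the VIP (vip_arp), leader election over the plndr-cp-lock lease (vip_leaderelection), and control-plane load balancing on port 8443 (lb_enable, lb_port). A small Go sketch that emits a subset of those name/value pairs in the same shape; the selection and comments are illustrative, not minikube's generator:

    package main

    import "fmt"

    // envPair mirrors one name/value entry in the kube-vip container spec above.
    type envPair struct{ Name, Value string }

    func main() {
    	// Illustrative subset of the settings from the generated manifest.
    	env := []envPair{
    		{"vip_arp", "true"},           // announce the VIP via gratuitous ARP
    		{"port", "8443"},              // API server port fronted by the VIP
    		{"cp_enable", "true"},         // enable control-plane load balancing
    		{"address", "192.168.39.254"}, // the HA virtual IP itself
    	}
    	for _, e := range env {
    		fmt.Printf("    - name: %s\n      value: %q\n", e.Name, e.Value)
    	}
    }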
	I0414 14:51:47.749916 1221070 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 14:51:47.759441 1221070 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 14:51:47.759517 1221070 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0414 14:51:47.768745 1221070 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0414 14:51:47.784598 1221070 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 14:51:47.800512 1221070 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0414 14:51:47.816194 1221070 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1442 bytes)
	I0414 14:51:47.832579 1221070 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0414 14:51:47.836561 1221070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:51:47.848464 1221070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:51:47.961061 1221070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:51:47.977110 1221070 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859 for IP: 192.168.39.110
	I0414 14:51:47.977148 1221070 certs.go:194] generating shared ca certs ...
	I0414 14:51:47.977165 1221070 certs.go:226] acquiring lock for ca certs: {Name:mk7215406b4c41badf9eca6bf9f1036fd88f670e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:51:47.977358 1221070 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key
	I0414 14:51:47.977426 1221070 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key
	I0414 14:51:47.977447 1221070 certs.go:256] generating profile certs ...
	I0414 14:51:47.977567 1221070 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key
	I0414 14:51:47.977595 1221070 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.c955092d
	I0414 14:51:47.977626 1221070 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.c955092d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110 192.168.39.111 192.168.39.254]
	I0414 14:51:48.116172 1221070 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.c955092d ...
	I0414 14:51:48.116203 1221070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.c955092d: {Name:mk9edc6f7524dc9ba3b3dee538c59fbd77ccd148 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:51:48.116397 1221070 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.c955092d ...
	I0414 14:51:48.116412 1221070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.c955092d: {Name:mk18dc0fd4ba99bfeaa95fae1a08a91f3d1054da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:51:48.116516 1221070 certs.go:381] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.c955092d -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt
	I0414 14:51:48.116679 1221070 certs.go:385] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.c955092d -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key
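The apiserver certificate above is issued for the full SAN set: the in-cluster service IP 10.96.0.1, loopback, both control-plane node IPs, and the kube-vip address 192.168.39.254. As an illustration of issuing a serving certificate with IP SANs via crypto/x509 (self-signed here for brevity, whereas minikube signs with its cluster CA):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{ // the SAN list from the log
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("192.168.39.110"), net.ParseIP("192.168.39.111"),
    			net.ParseIP("192.168.39.254"),
    		},
    	}
    	// Self-signed: template doubles as issuer.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
    		panic(err)
    	}
    }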
	I0414 14:51:48.116822 1221070 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key
	I0414 14:51:48.116845 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0414 14:51:48.116863 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0414 14:51:48.116876 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0414 14:51:48.116888 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0414 14:51:48.116898 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0414 14:51:48.116907 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0414 14:51:48.116916 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0414 14:51:48.116925 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0414 14:51:48.116971 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem (1338 bytes)
	W0414 14:51:48.117008 1221070 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639_empty.pem, impossibly tiny 0 bytes
	I0414 14:51:48.117018 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 14:51:48.117040 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem (1082 bytes)
	I0414 14:51:48.117066 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:51:48.117086 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem (1675 bytes)
	I0414 14:51:48.117120 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:51:48.117150 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /usr/share/ca-certificates/12036392.pem
	I0414 14:51:48.117163 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:51:48.117173 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem -> /usr/share/ca-certificates/1203639.pem
	I0414 14:51:48.117829 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:51:48.149051 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:51:48.177053 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:51:48.209173 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:51:48.253240 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0414 14:51:48.287575 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 14:51:48.318676 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:51:48.341473 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 14:51:48.364366 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /usr/share/ca-certificates/12036392.pem (1708 bytes)
	I0414 14:51:48.392240 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:51:48.414262 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem --> /usr/share/ca-certificates/1203639.pem (1338 bytes)
	I0414 14:51:48.435434 1221070 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 14:51:48.451391 1221070 ssh_runner.go:195] Run: openssl version
	I0414 14:51:48.456643 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12036392.pem && ln -fs /usr/share/ca-certificates/12036392.pem /etc/ssl/certs/12036392.pem"
	I0414 14:51:48.467055 1221070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12036392.pem
	I0414 14:51:48.471094 1221070 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/12036392.pem
	I0414 14:51:48.471167 1221070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12036392.pem
	I0414 14:51:48.476620 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12036392.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 14:51:48.487041 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:51:48.497119 1221070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:51:48.501253 1221070 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:51:48.501303 1221070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:51:48.506464 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 14:51:48.516670 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1203639.pem && ln -fs /usr/share/ca-certificates/1203639.pem /etc/ssl/certs/1203639.pem"
	I0414 14:51:48.526675 1221070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1203639.pem
	I0414 14:51:48.530724 1221070 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1203639.pem
	I0414 14:51:48.530790 1221070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1203639.pem
	I0414 14:51:48.536779 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1203639.pem /etc/ssl/certs/51391683.0"
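Each of the three CA bundles above is installed by symlinking /etc/ssl/certs/<subject-hash>.0 to the PEM file, which is how OpenSSL-based clients locate trust anchors by hash. A sketch of that convention in Go, shelling out to openssl for the hash exactly as the log does (paths taken from the log; writing /etc/ssl/certs needs root):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
    	// `openssl x509 -hash -noout` prints the subject hash, e.g. "b5213941".
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	_ = os.Remove(link) // mirror ln -fs: replace any existing link
    	if err := os.Symlink(pemPath, link); err != nil {
    		panic(err)
    	}
    }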
	I0414 14:51:48.547496 1221070 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:51:48.551752 1221070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0414 14:51:48.557436 1221070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0414 14:51:48.563312 1221070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0414 14:51:48.569039 1221070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0414 14:51:48.575033 1221070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0414 14:51:48.580579 1221070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
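The six openssl invocations above are `-checkend 86400` probes: each succeeds only if the certificate is still valid 24 hours from now. A Go equivalent using crypto/x509 (the path is one of the files checked above; the helper name checkend is ours):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // checkend reports whether the first certificate in the PEM file expires
    // within d, mirroring `openssl x509 -checkend` as used in the log above.
    func checkend(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	expiring, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("expires within 24h:", expiring)
    }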
	I0414 14:51:48.586320 1221070 kubeadm.go:392] StartCluster: {Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.112 Port:0 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:51:48.586432 1221070 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0414 14:51:48.586516 1221070 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 14:51:48.621007 1221070 cri.go:89] found id: "731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0"
	I0414 14:51:48.621036 1221070 cri.go:89] found id: "0ec0a3a234c7c9ab89ca83a237362a229e9c5f0e94fdbf641b886cf994e1cd2f"
	I0414 14:51:48.621043 1221070 cri.go:89] found id: "922f97d06563e10c12ce83edd45e4f1aa0b78449dcdb50b413a7f4fc80cc346b"
	I0414 14:51:48.621047 1221070 cri.go:89] found id: "2df8ccb8d6ed928a95e69ecd1be2105fc737c699aa26805820a0af0eca5bb50d"
	I0414 14:51:48.621051 1221070 cri.go:89] found id: "e22a81661302ff340c9846a7a06a13d955ab98cfe8e7088e0c805fb4f3eee8a2"
	I0414 14:51:48.621056 1221070 cri.go:89] found id: "9914f8879fc4321c682c89c4d9b8a4cf65aa1773b5281eca94e0f93095a24f4d"
	I0414 14:51:48.621059 1221070 cri.go:89] found id: "8263b35014337f6119ba3a0d6487090fd5b1b3b8a002a99623620e847d186847"
	I0414 14:51:48.621063 1221070 cri.go:89] found id: "3607093f95b0430c4841d7be9ed19d0163ff2e9ee2889a44f89bd1ca07bf42d3"
	I0414 14:51:48.621066 1221070 cri.go:89] found id: "b9d0c942045346e617420beacf1ee53ebaa73b72295bfad233845fe524f8b15c"
	I0414 14:51:48.621076 1221070 cri.go:89] found id: "341626ffff967b14e3bfaa050905eba2b82a07223c0356ee50b5deeef6d9898b"
	I0414 14:51:48.621080 1221070 cri.go:89] found id: ""
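The IDs above come from `crictl ps -a --quiet`, which prints one container ID per line for containers matching the given label; the trailing empty ID is presumably the artifact of splitting the output on its final newline. A minimal Go sketch issuing the same query and echoing the IDs (assumes crictl on PATH and sudo rights, as in the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		panic(err)
    	}
    	// strings.Fields drops the empty trailing element a plain split would keep.
    	for _, id := range strings.Fields(string(out)) {
    		fmt.Println("found id:", id)
    	}
    }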
	I0414 14:51:48.621136 1221070 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0414 14:51:48.634683 1221070 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-04-14T14:51:48Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0414 14:51:48.634779 1221070 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 14:51:48.644649 1221070 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0414 14:51:48.644668 1221070 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0414 14:51:48.644716 1221070 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0414 14:51:48.653466 1221070 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0414 14:51:48.653918 1221070 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-290859" does not appear in /home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:51:48.654026 1221070 kubeconfig.go:62] /home/jenkins/minikube-integration/20512-1196368/kubeconfig needs updating (will repair): [kubeconfig missing "ha-290859" cluster setting kubeconfig missing "ha-290859" context setting]
	I0414 14:51:48.654307 1221070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/kubeconfig: {Name:mkeb969af3beabfdafe344f27031959a97621135 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:51:48.654727 1221070 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:51:48.654871 1221070 kapi.go:59] client config for ha-290859: &rest.Config{Host:"https://192.168.39.110:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt", KeyFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key", CAFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0414 14:51:48.655325 1221070 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0414 14:51:48.655343 1221070 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0414 14:51:48.655349 1221070 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0414 14:51:48.655355 1221070 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0414 14:51:48.655383 1221070 cert_rotation.go:140] Starting client certificate rotation controller
	I0414 14:51:48.655782 1221070 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0414 14:51:48.666379 1221070 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.110
	I0414 14:51:48.666416 1221070 kubeadm.go:597] duration metric: took 21.742146ms to restartPrimaryControlPlane
	I0414 14:51:48.666430 1221070 kubeadm.go:394] duration metric: took 80.118757ms to StartCluster
	I0414 14:51:48.666454 1221070 settings.go:142] acquiring lock: {Name:mk41907a6d0da0bb56b7cd58b5d8065ec36ecc97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:51:48.666542 1221070 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:51:48.667357 1221070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/kubeconfig: {Name:mkeb969af3beabfdafe344f27031959a97621135 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
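The repair above re-adds the missing "ha-290859" cluster and context entries to the shared kubeconfig. A sketch of the same operation against client-go's clientcmd API (server address and names taken from the log; the AuthInfo wiring is simplified relative to what minikube writes):

    package main

    import (
    	"k8s.io/client-go/tools/clientcmd"
    	"k8s.io/client-go/tools/clientcmd/api"
    )

    func main() {
    	path := "/home/jenkins/minikube-integration/20512-1196368/kubeconfig"
    	cfg, err := clientcmd.LoadFromFile(path)
    	if err != nil {
    		panic(err)
    	}
    	// Re-add the cluster and context entries the verification found missing.
    	cfg.Clusters["ha-290859"] = &api.Cluster{Server: "https://192.168.39.110:8443"}
    	cfg.Contexts["ha-290859"] = &api.Context{Cluster: "ha-290859", AuthInfo: "ha-290859"}
    	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
    		panic(err)
    	}
    }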
	I0414 14:51:48.667681 1221070 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0414 14:51:48.667715 1221070 start.go:241] waiting for startup goroutines ...
	I0414 14:51:48.667737 1221070 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 14:51:48.667972 1221070 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:51:48.670730 1221070 out.go:177] * Enabled addons: 
	I0414 14:51:48.671774 1221070 addons.go:514] duration metric: took 4.043718ms for enable addons: enabled=[]
	I0414 14:51:48.671816 1221070 start.go:246] waiting for cluster config update ...
	I0414 14:51:48.671833 1221070 start.go:255] writing updated cluster config ...
	I0414 14:51:48.673542 1221070 out.go:201] 
	I0414 14:51:48.674918 1221070 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:51:48.675012 1221070 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:51:48.676439 1221070 out.go:177] * Starting "ha-290859-m02" control-plane node in "ha-290859" cluster
	I0414 14:51:48.677470 1221070 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:51:48.677501 1221070 cache.go:56] Caching tarball of preloaded images
	I0414 14:51:48.677610 1221070 preload.go:172] Found /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0414 14:51:48.677625 1221070 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0414 14:51:48.677734 1221070 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:51:48.677945 1221070 start.go:360] acquireMachinesLock for ha-290859-m02: {Name:mk496006d22a0565bb9e0d565e1b3cb0cf0971cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 14:51:48.677999 1221070 start.go:364] duration metric: took 29.352µs to acquireMachinesLock for "ha-290859-m02"
	I0414 14:51:48.678015 1221070 start.go:96] Skipping create...Using existing machine configuration
	I0414 14:51:48.678023 1221070 fix.go:54] fixHost starting: m02
	I0414 14:51:48.678300 1221070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:51:48.678338 1221070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:51:48.694625 1221070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46149
	I0414 14:51:48.695133 1221070 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:51:48.695644 1221070 main.go:141] libmachine: Using API Version  1
	I0414 14:51:48.695672 1221070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:51:48.696059 1221070 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:51:48.696257 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:51:48.696396 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetState
	I0414 14:51:48.697918 1221070 fix.go:112] recreateIfNeeded on ha-290859-m02: state=Stopped err=<nil>
	I0414 14:51:48.697944 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	W0414 14:51:48.698147 1221070 fix.go:138] unexpected machine state, will restart: <nil>
	I0414 14:51:48.699709 1221070 out.go:177] * Restarting existing kvm2 VM for "ha-290859-m02" ...
	I0414 14:51:48.700791 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .Start
	I0414 14:51:48.701016 1221070 main.go:141] libmachine: (ha-290859-m02) starting domain...
	I0414 14:51:48.701037 1221070 main.go:141] libmachine: (ha-290859-m02) ensuring networks are active...
	I0414 14:51:48.701680 1221070 main.go:141] libmachine: (ha-290859-m02) Ensuring network default is active
	I0414 14:51:48.701964 1221070 main.go:141] libmachine: (ha-290859-m02) Ensuring network mk-ha-290859 is active
	I0414 14:51:48.702320 1221070 main.go:141] libmachine: (ha-290859-m02) getting domain XML...
	I0414 14:51:48.703123 1221070 main.go:141] libmachine: (ha-290859-m02) creating domain...
	I0414 14:51:49.928511 1221070 main.go:141] libmachine: (ha-290859-m02) waiting for IP...
	I0414 14:51:49.929302 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:51:49.929682 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:51:49.929753 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:51:49.929668 1221256 retry.go:31] will retry after 213.167481ms: waiting for domain to come up
	I0414 14:51:50.144304 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:51:50.144886 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:51:50.144914 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:51:50.144841 1221256 retry.go:31] will retry after 331.221156ms: waiting for domain to come up
	I0414 14:51:50.477450 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:51:50.477938 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:51:50.477993 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:51:50.477923 1221256 retry.go:31] will retry after 310.58732ms: waiting for domain to come up
	I0414 14:51:50.790523 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:51:50.791165 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:51:50.791199 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:51:50.791085 1221256 retry.go:31] will retry after 545.346683ms: waiting for domain to come up
	I0414 14:51:51.337935 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:51:51.338399 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:51:51.338425 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:51:51.338357 1221256 retry.go:31] will retry after 756.05518ms: waiting for domain to come up
	I0414 14:51:52.096242 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:51:52.096695 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:51:52.096730 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:51:52.096648 1221256 retry.go:31] will retry after 823.090094ms: waiting for domain to come up
	I0414 14:51:52.921657 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:51:52.922142 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:51:52.922184 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:51:52.922101 1221256 retry.go:31] will retry after 970.69668ms: waiting for domain to come up
	I0414 14:51:53.894927 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:51:53.895561 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:51:53.895594 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:51:53.895517 1221256 retry.go:31] will retry after 1.032622919s: waiting for domain to come up
	I0414 14:51:54.929442 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:51:54.929927 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:51:54.929952 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:51:54.929923 1221256 retry.go:31] will retry after 1.334812207s: waiting for domain to come up
	I0414 14:51:56.266967 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:51:56.267482 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:51:56.267510 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:51:56.267455 1221256 retry.go:31] will retry after 1.510894415s: waiting for domain to come up
	I0414 14:51:57.780426 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:51:57.780971 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:51:57.781004 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:51:57.780920 1221256 retry.go:31] will retry after 2.39467668s: waiting for domain to come up
	I0414 14:52:00.177702 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:00.178090 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:52:00.178121 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:52:00.178065 1221256 retry.go:31] will retry after 3.552625428s: waiting for domain to come up
	I0414 14:52:03.732281 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:03.732786 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:52:03.732838 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:52:03.732762 1221256 retry.go:31] will retry after 4.321714949s: waiting for domain to come up
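The "waiting for domain to come up" lines above show a retry loop whose delay grows and is jittered between attempts (213ms, 331ms, ... 4.3s). A compact Go sketch of that pattern; lookupIP is a hypothetical stand-in for querying libvirt's DHCP leases, and the growth factor is illustrative rather than the driver's exact schedule:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP stands in for asking libvirt which IP the domain leased.
    func lookupIP() (string, error) {
    	return "", errors.New("unable to find current IP address")
    }

    func main() {
    	delay := 200 * time.Millisecond
    	for attempt := 1; attempt <= 15; attempt++ {
    		ip, err := lookupIP()
    		if err == nil {
    			fmt.Println("found domain IP:", ip)
    			return
    		}
    		// Jitter the wait so concurrent waiters don't poll in lockstep.
    		wait := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("retry %d: will retry after %v: %v\n", attempt, wait, err)
    		time.Sleep(wait)
    		delay = delay * 3 / 2 // grow the base delay between attempts
    	}
    }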
	I0414 14:52:08.057427 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.057990 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has current primary IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.058015 1221070 main.go:141] libmachine: (ha-290859-m02) found domain IP: 192.168.39.111
	I0414 14:52:08.058030 1221070 main.go:141] libmachine: (ha-290859-m02) reserving static IP address...
	I0414 14:52:08.058568 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "ha-290859-m02", mac: "52:54:00:f0:fd:94", ip: "192.168.39.111"} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.058598 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | skip adding static IP to network mk-ha-290859 - found existing host DHCP lease matching {name: "ha-290859-m02", mac: "52:54:00:f0:fd:94", ip: "192.168.39.111"}
	I0414 14:52:08.058616 1221070 main.go:141] libmachine: (ha-290859-m02) reserved static IP address 192.168.39.111 for domain ha-290859-m02
	I0414 14:52:08.058624 1221070 main.go:141] libmachine: (ha-290859-m02) waiting for SSH...
	I0414 14:52:08.058632 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | Getting to WaitForSSH function...
	I0414 14:52:08.061480 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.061822 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.061855 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.062002 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | Using SSH client type: external
	I0414 14:52:08.062025 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa (-rw-------)
	I0414 14:52:08.062058 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:52:08.062073 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | About to run SSH command:
	I0414 14:52:08.062084 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | exit 0
	I0414 14:52:08.183207 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | SSH cmd err, output: <nil>: 
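The probe above runs `exit 0` over SSH with host-key checking disabled until the command succeeds, which is the driver's signal that the VM is reachable. An equivalent one-shot probe using golang.org/x/crypto/ssh instead of shelling out to /usr/bin/ssh (key path and address from the log):

    package main

    import (
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no
    	}
    	client, err := ssh.Dial("tcp", "192.168.39.111:22", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()
    	if err := sess.Run("exit 0"); err != nil { // VM is ready once this succeeds
    		panic(err)
    	}
    }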
	I0414 14:52:08.183609 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
	I0414 14:52:08.184236 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:52:08.186802 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.187282 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.187322 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.187609 1221070 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:52:08.187825 1221070 machine.go:93] provisionDockerMachine start ...
	I0414 14:52:08.187846 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:52:08.188131 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:52:08.190391 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.190830 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.190855 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.191024 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:52:08.191211 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:08.191410 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:08.191557 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:52:08.191706 1221070 main.go:141] libmachine: Using SSH client type: native
	I0414 14:52:08.192061 1221070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:52:08.192080 1221070 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 14:52:08.291480 1221070 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0414 14:52:08.291525 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:52:08.291906 1221070 buildroot.go:166] provisioning hostname "ha-290859-m02"
	I0414 14:52:08.291946 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:52:08.292200 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:52:08.295446 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.295895 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.295926 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.296203 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:52:08.296433 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:08.296612 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:08.296787 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:52:08.297073 1221070 main.go:141] libmachine: Using SSH client type: native
	I0414 14:52:08.297293 1221070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:52:08.297305 1221070 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-290859-m02 && echo "ha-290859-m02" | sudo tee /etc/hostname
	I0414 14:52:08.410482 1221070 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-290859-m02
	
	I0414 14:52:08.410517 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:52:08.413198 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.413585 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.413621 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.413794 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:52:08.414028 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:08.414223 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:08.414369 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:52:08.414529 1221070 main.go:141] libmachine: Using SSH client type: native
	I0414 14:52:08.414731 1221070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:52:08.414746 1221070 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-290859-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-290859-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-290859-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:52:08.522305 1221070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
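The shell snippet above keeps /etc/hosts consistent with the new hostname: if no line already ends in ha-290859-m02, it rewrites an existing 127.0.1.1 entry or appends one. A rough Go equivalent of that edit as a pure function (setLocalHostname is our name; the real code runs the shell shown above over SSH):

    package main

    import (
    	"fmt"
    	"regexp"
    	"strings"
    )

    // setLocalHostname mirrors the shell logic: leave the file alone if the
    // hostname is present, else rewrite 127.0.1.1 or append a new mapping.
    func setLocalHostname(hosts, name string) string {
    	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
    		return hosts // hostname already mapped
    	}
    	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	if re.MatchString(hosts) {
    		return re.ReplaceAllString(hosts, "127.0.1.1 "+name)
    	}
    	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
    }

    func main() {
    	fmt.Print(setLocalHostname("127.0.0.1 localhost\n", "ha-290859-m02"))
    }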
	I0414 14:52:08.522338 1221070 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1196368/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1196368/.minikube}
	I0414 14:52:08.522355 1221070 buildroot.go:174] setting up certificates
	I0414 14:52:08.522368 1221070 provision.go:84] configureAuth start
	I0414 14:52:08.522377 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:52:08.522678 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:52:08.525718 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.526180 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.526208 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.526396 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:52:08.528768 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.529141 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.529174 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.529288 1221070 provision.go:143] copyHostCerts
	I0414 14:52:08.529323 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:52:08.529356 1221070 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem, removing ...
	I0414 14:52:08.529364 1221070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:52:08.529418 1221070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem (1082 bytes)
	I0414 14:52:08.529544 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:52:08.529566 1221070 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem, removing ...
	I0414 14:52:08.529571 1221070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:52:08.529594 1221070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem (1123 bytes)
	I0414 14:52:08.529638 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:52:08.529656 1221070 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem, removing ...
	I0414 14:52:08.529663 1221070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:52:08.529681 1221070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem (1675 bytes)
	I0414 14:52:08.529727 1221070 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem org=jenkins.ha-290859-m02 san=[127.0.0.1 192.168.39.111 ha-290859-m02 localhost minikube]
	I0414 14:52:08.556497 1221070 provision.go:177] copyRemoteCerts
	I0414 14:52:08.556548 1221070 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:52:08.556569 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:52:08.559078 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.559480 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.559504 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.559685 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:52:08.559875 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:08.560067 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:52:08.560219 1221070 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:52:08.637398 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0414 14:52:08.637469 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 14:52:08.661142 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0414 14:52:08.661219 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0414 14:52:08.683109 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0414 14:52:08.683191 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0414 14:52:08.705705 1221070 provision.go:87] duration metric: took 183.321321ms to configureAuth
	I0414 14:52:08.705738 1221070 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:52:08.706026 1221070 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:52:08.706045 1221070 machine.go:96] duration metric: took 518.207609ms to provisionDockerMachine
	I0414 14:52:08.706054 1221070 start.go:293] postStartSetup for "ha-290859-m02" (driver="kvm2")
	I0414 14:52:08.706063 1221070 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:52:08.706087 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:52:08.706363 1221070 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:52:08.706392 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:52:08.709099 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.709429 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.709457 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.709689 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:52:08.709903 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:08.710118 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:52:08.710263 1221070 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:52:08.791281 1221070 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:52:08.795310 1221070 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:52:08.795344 1221070 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/addons for local assets ...
	I0414 14:52:08.795409 1221070 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/files for local assets ...
	I0414 14:52:08.795482 1221070 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> 12036392.pem in /etc/ssl/certs
	I0414 14:52:08.795492 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /etc/ssl/certs/12036392.pem
	I0414 14:52:08.795570 1221070 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:52:08.806018 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:52:08.828791 1221070 start.go:296] duration metric: took 122.715902ms for postStartSetup
	I0414 14:52:08.828841 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:52:08.829192 1221070 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0414 14:52:08.829225 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:52:08.832093 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.832474 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.832500 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.832687 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:52:08.832874 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:08.833046 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:52:08.833191 1221070 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:52:08.914136 1221070 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0414 14:52:08.914227 1221070 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0414 14:52:08.970338 1221070 fix.go:56] duration metric: took 20.292306098s for fixHost
	I0414 14:52:08.970422 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:52:08.973148 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.973612 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.973662 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.973866 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:52:08.974071 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:08.974273 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:08.974383 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:52:08.974544 1221070 main.go:141] libmachine: Using SSH client type: native
	I0414 14:52:08.974752 1221070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:52:08.974761 1221070 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:52:09.075896 1221070 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744642329.038020711
	
	I0414 14:52:09.075916 1221070 fix.go:216] guest clock: 1744642329.038020711
	I0414 14:52:09.075924 1221070 fix.go:229] Guest: 2025-04-14 14:52:09.038020711 +0000 UTC Remote: 2025-04-14 14:52:08.970369466 +0000 UTC m=+44.085587632 (delta=67.651245ms)
	I0414 14:52:09.075939 1221070 fix.go:200] guest clock delta is within tolerance: 67.651245ms
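
The tolerance check above is simple arithmetic: guest 14:52:09.038020711 minus remote 14:52:08.970369466 gives the logged delta of 67.651245ms, which is compared against an allowed drift. A minimal sketch of that comparison; the tolerance value is whatever fix.go configures, not shown in the log:

    package clockcheck

    import "time"

    // withinTolerance returns the absolute guest/host clock delta and whether
    // it falls inside the allowed drift, as in fix.go's "within tolerance" log.
    func withinTolerance(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
        d := guest.Sub(host)
        if d < 0 {
            d = -d
        }
        return d, d <= tol
    }
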
	I0414 14:52:09.075944 1221070 start.go:83] releasing machines lock for "ha-290859-m02", held for 20.397936123s
	I0414 14:52:09.075962 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:52:09.076232 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:52:09.079036 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:09.079425 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:09.079456 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:09.081479 1221070 out.go:177] * Found network options:
	I0414 14:52:09.082752 1221070 out.go:177]   - NO_PROXY=192.168.39.110
	W0414 14:52:09.084044 1221070 proxy.go:119] fail to check proxy env: Error ip not in block
	I0414 14:52:09.084079 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:52:09.084689 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:52:09.084887 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:52:09.084984 1221070 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:52:09.085023 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	W0414 14:52:09.085117 1221070 proxy.go:119] fail to check proxy env: Error ip not in block
	I0414 14:52:09.085206 1221070 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0414 14:52:09.085232 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:52:09.088187 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:09.088476 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:09.088613 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:09.088643 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:09.088794 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:52:09.088903 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:09.088928 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:09.088974 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:09.089083 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:52:09.089161 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:52:09.089227 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:09.089297 1221070 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:52:09.089336 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:52:09.089483 1221070 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	W0414 14:52:09.194292 1221070 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:52:09.194439 1221070 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:52:09.211568 1221070 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 14:52:09.211600 1221070 start.go:495] detecting cgroup driver to use...
	I0414 14:52:09.211684 1221070 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0414 14:52:09.239355 1221070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0414 14:52:09.252164 1221070 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:52:09.252247 1221070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:52:09.266619 1221070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:52:09.279466 1221070 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:52:09.408504 1221070 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:52:09.554621 1221070 docker.go:233] disabling docker service ...
	I0414 14:52:09.554705 1221070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:52:09.567849 1221070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:52:09.579882 1221070 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:52:09.691627 1221070 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:52:09.801979 1221070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 14:52:09.824437 1221070 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:52:09.841408 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0414 14:52:09.851062 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0414 14:52:09.860777 1221070 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0414 14:52:09.860826 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0414 14:52:09.870133 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:52:09.879955 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0414 14:52:09.889567 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:52:09.899405 1221070 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:52:09.909754 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0414 14:52:09.919673 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0414 14:52:09.929572 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
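
The sed pipeline above (sandbox_image, restrict_oom_score_adj, SystemdCgroup, runtime type, conf_dir, enable_unprivileged_ports) is a line-oriented rewrite of /etc/containerd/config.toml. A sketch of the SystemdCgroup edit as an equivalent Go regexp, illustrative rather than minikube's implementation:

    package containerdcfg

    import "regexp"

    // Matches `sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'`,
    // forcing containerd onto the "cgroupfs" driver chosen in the log above.
    var systemdCgroupRe = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)

    func forceCgroupfs(configTOML string) string {
        return systemdCgroupRe.ReplaceAllString(configTOML, "${1}SystemdCgroup = false")
    }
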
	I0414 14:52:09.939053 1221070 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:52:09.947490 1221070 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:52:09.947546 1221070 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:52:09.959627 1221070 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
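
The status-255 sysctl above is the expected first-boot path: /proc/sys/net/bridge/* only exists once br_netfilter is loaded, so the code falls back to modprobe and then enables IPv4 forwarding. A sketch of that sequence, using the same commands the log shows:

    package netprep

    import "os/exec"

    // ensureBridgeNetfilter probes the bridge sysctl, loads br_netfilter when
    // it is missing, and turns on ip_forward for pod networking.
    func ensureBridgeNetfilter() error {
        if exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run() != nil {
            if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
                return err
            }
        }
        return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
    }
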
	I0414 14:52:09.968379 1221070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:52:10.086027 1221070 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0414 14:52:10.118333 1221070 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0414 14:52:10.118430 1221070 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:52:10.122969 1221070 retry.go:31] will retry after 818.918333ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
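
The retry above is the normal race after `systemctl restart containerd`: the socket appears a moment after the unit starts. A minimal sketch of such a poll loop; the exponential backoff is illustrative, since retry.go computes its own jittered delays such as the logged 818.918333ms:

    package sockwait

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket stats the containerd socket until it exists or the
    // deadline passes, mirroring "Will wait 60s for socket path" above.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        backoff := 500 * time.Millisecond
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(backoff)
            backoff *= 2
        }
        return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    }
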
	I0414 14:52:10.943062 1221070 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:52:10.948132 1221070 start.go:563] Will wait 60s for crictl version
	I0414 14:52:10.948196 1221070 ssh_runner.go:195] Run: which crictl
	I0414 14:52:10.952231 1221070 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:52:10.988005 1221070 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0414 14:52:10.988097 1221070 ssh_runner.go:195] Run: containerd --version
	I0414 14:52:11.012963 1221070 ssh_runner.go:195] Run: containerd --version
	I0414 14:52:11.038206 1221070 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.23 ...
	I0414 14:52:11.039588 1221070 out.go:177]   - env NO_PROXY=192.168.39.110
	I0414 14:52:11.040724 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:52:11.043716 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:11.044108 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:11.044129 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:11.044384 1221070 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 14:52:11.048381 1221070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
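
The one-liner above is an idempotent /etc/hosts upsert: filter out any stale line ending in a tab plus the host name, then append the fresh mapping. The same logic as a Go sketch, illustrative rather than minikube's code:

    package hostsfile

    import "strings"

    // upsertHost drops any existing entry for name and appends "ip\tname",
    // matching the grep -v / echo pipeline in the log above.
    func upsertHost(hosts, ip, name string) string {
        var keep []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                keep = append(keep, line)
            }
        }
        keep = append(keep, ip+"\t"+name)
        return strings.Join(keep, "\n") + "\n"
    }
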
	I0414 14:52:11.060281 1221070 mustload.go:65] Loading cluster: ha-290859
	I0414 14:52:11.060535 1221070 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:52:11.060920 1221070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:52:11.060972 1221070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:52:11.076673 1221070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40435
	I0414 14:52:11.077200 1221070 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:52:11.077672 1221070 main.go:141] libmachine: Using API Version  1
	I0414 14:52:11.077694 1221070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:52:11.078067 1221070 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:52:11.078244 1221070 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:52:11.079808 1221070 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:52:11.080127 1221070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:52:11.080174 1221070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:52:11.095417 1221070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37849
	I0414 14:52:11.095844 1221070 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:52:11.096258 1221070 main.go:141] libmachine: Using API Version  1
	I0414 14:52:11.096277 1221070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:52:11.096639 1221070 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:52:11.096826 1221070 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:52:11.096989 1221070 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859 for IP: 192.168.39.111
	I0414 14:52:11.097003 1221070 certs.go:194] generating shared ca certs ...
	I0414 14:52:11.097029 1221070 certs.go:226] acquiring lock for ca certs: {Name:mk7215406b4c41badf9eca6bf9f1036fd88f670e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:52:11.097193 1221070 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key
	I0414 14:52:11.097269 1221070 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key
	I0414 14:52:11.097285 1221070 certs.go:256] generating profile certs ...
	I0414 14:52:11.097381 1221070 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key
	I0414 14:52:11.097463 1221070 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e
	I0414 14:52:11.097524 1221070 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key
	I0414 14:52:11.097538 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0414 14:52:11.097560 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0414 14:52:11.097577 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0414 14:52:11.097593 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0414 14:52:11.097611 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0414 14:52:11.097629 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0414 14:52:11.097646 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0414 14:52:11.097662 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0414 14:52:11.097724 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem (1338 bytes)
	W0414 14:52:11.097762 1221070 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639_empty.pem, impossibly tiny 0 bytes
	I0414 14:52:11.097777 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 14:52:11.097809 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem (1082 bytes)
	I0414 14:52:11.097839 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:52:11.097866 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem (1675 bytes)
	I0414 14:52:11.097945 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:52:11.097992 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:52:11.098014 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem -> /usr/share/ca-certificates/1203639.pem
	I0414 14:52:11.098038 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /usr/share/ca-certificates/12036392.pem
	I0414 14:52:11.098070 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:52:11.100966 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:52:11.101386 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:52:11.101405 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:52:11.101550 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:52:11.101731 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:52:11.101862 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:52:11.102010 1221070 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:52:11.175602 1221070 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0414 14:52:11.180006 1221070 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0414 14:52:11.189968 1221070 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0414 14:52:11.193728 1221070 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0414 14:52:11.203099 1221070 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0414 14:52:11.207009 1221070 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0414 14:52:11.216071 1221070 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0414 14:52:11.219518 1221070 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0414 14:52:11.228688 1221070 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0414 14:52:11.232239 1221070 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0414 14:52:11.241095 1221070 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0414 14:52:11.244486 1221070 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0414 14:52:11.253441 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:52:11.277269 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:52:11.299096 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:52:11.320223 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:52:11.341633 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0414 14:52:11.362868 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 14:52:11.386598 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:52:11.408609 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 14:52:11.430516 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:52:11.452312 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem --> /usr/share/ca-certificates/1203639.pem (1338 bytes)
	I0414 14:52:11.474971 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /usr/share/ca-certificates/12036392.pem (1708 bytes)
	I0414 14:52:11.496336 1221070 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0414 14:52:11.511579 1221070 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0414 14:52:11.526436 1221070 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0414 14:52:11.541220 1221070 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0414 14:52:11.556734 1221070 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0414 14:52:11.573710 1221070 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0414 14:52:11.589103 1221070 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0414 14:52:11.604809 1221070 ssh_runner.go:195] Run: openssl version
	I0414 14:52:11.610110 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1203639.pem && ln -fs /usr/share/ca-certificates/1203639.pem /etc/ssl/certs/1203639.pem"
	I0414 14:52:11.620147 1221070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1203639.pem
	I0414 14:52:11.624394 1221070 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1203639.pem
	I0414 14:52:11.624454 1221070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1203639.pem
	I0414 14:52:11.629850 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1203639.pem /etc/ssl/certs/51391683.0"
	I0414 14:52:11.639862 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12036392.pem && ln -fs /usr/share/ca-certificates/12036392.pem /etc/ssl/certs/12036392.pem"
	I0414 14:52:11.649796 1221070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12036392.pem
	I0414 14:52:11.653828 1221070 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/12036392.pem
	I0414 14:52:11.653894 1221070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12036392.pem
	I0414 14:52:11.659174 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12036392.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 14:52:11.669032 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:52:11.678764 1221070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:52:11.682817 1221070 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:52:11.682885 1221070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:52:11.688098 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
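
The symlink names above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's CA lookup convention: certificates in /etc/ssl/certs are found via "<subject-hash>.0" links, where the hash comes from `openssl x509 -hash`. A sketch of that step as a hypothetical helper, not minikube's certs.go:

    package certlink

    import (
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkBySubjectHash creates the "<hash>.0" symlink OpenSSL expects,
    // mirroring the `test -L ... || ln -fs ...` commands in the log.
    func linkBySubjectHash(pemPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // replace a stale link, as `ln -fs` would
        return os.Symlink(pemPath, link)
    }
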
	I0414 14:52:11.697831 1221070 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:52:11.701550 1221070 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 14:52:11.701601 1221070 kubeadm.go:934] updating node {m02 192.168.39.111 8443 v1.32.2 containerd true true} ...
	I0414 14:52:11.701691 1221070 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-290859-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
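
The kubelet drop-in above is rendered per node from the cluster config dump: the binary path comes from KubernetesVersion, hostname-override from the node name, node-ip from the machine's address. A sketch of that templating as a hypothetical helper; the real code renders a systemd unit template in kubeadm.go:

    package kubeletunit

    import "fmt"

    // execStart assembles the flags shown in the [Service] section above.
    func execStart(version, nodeName, nodeIP string) string {
        return fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet "+
            "--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf "+
            "--config=/var/lib/kubelet/config.yaml "+
            "--hostname-override=%s "+
            "--kubeconfig=/etc/kubernetes/kubelet.conf "+
            "--node-ip=%s", version, nodeName, nodeIP)
    }
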
	I0414 14:52:11.701720 1221070 kube-vip.go:115] generating kube-vip config ...
	I0414 14:52:11.701774 1221070 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0414 14:52:11.717854 1221070 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0414 14:52:11.717951 1221070 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.10
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
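
In the manifest above, kube-vip runs leader election over the plndr-cp-lock lease so exactly one control-plane node answers for the VIP 192.168.39.254, with lease/renew/retry timings of 5s/3s/1s. Those values must keep the usual ordering lease > renew > retry; a small sketch of that conversion and check, illustrative rather than kube-vip's code:

    package kubevip

    import (
        "fmt"
        "time"
    )

    // leaseTimings converts the manifest's integer seconds
    // (vip_leaseduration=5, vip_renewdeadline=3, vip_retryperiod=1)
    // and enforces the leader-election invariant.
    func leaseTimings(leaseSec, renewSec, retrySec int) (lease, renew, retry time.Duration, err error) {
        lease = time.Duration(leaseSec) * time.Second
        renew = time.Duration(renewSec) * time.Second
        retry = time.Duration(retrySec) * time.Second
        if lease <= renew || renew <= retry {
            err = fmt.Errorf("invalid timings: lease=%s renew=%s retry=%s", lease, renew, retry)
        }
        return
    }
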
	I0414 14:52:11.718009 1221070 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 14:52:11.727618 1221070 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 14:52:11.727676 1221070 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0414 14:52:11.736203 1221070 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0414 14:52:11.751774 1221070 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 14:52:11.768120 1221070 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1442 bytes)
	I0414 14:52:11.783489 1221070 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0414 14:52:11.787006 1221070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:52:11.798424 1221070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:52:11.903985 1221070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:52:11.921547 1221070 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0414 14:52:11.921874 1221070 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:52:11.923383 1221070 out.go:177] * Verifying Kubernetes components...
	I0414 14:52:11.924548 1221070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:52:12.079718 1221070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:52:12.096131 1221070 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:52:12.096280 1221070 kapi.go:59] client config for ha-290859: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt", KeyFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key", CAFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0414 14:52:12.096344 1221070 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.110:8443
	I0414 14:52:12.096629 1221070 node_ready.go:35] waiting up to 6m0s for node "ha-290859-m02" to be "Ready" ...
	I0414 14:52:12.096770 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:12.096778 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:12.096786 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:12.096792 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:12.105014 1221070 round_trippers.go:581] Response Status: 404 Not Found in 8 milliseconds
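
The 404s that follow are expected while waiting: the Node object for ha-290859-m02 does not exist until its kubelet registers with the API server, so node_ready.go keeps polling. A minimal sketch of that wait loop with client-go, assuming an already-constructed clientset; the 500ms interval is illustrative:

    package nodewait

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls GET /api/v1/nodes/<name> until the node exists and
    // reports Ready, treating 404 as "not registered yet".
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) bool {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if apierrors.IsNotFound(err) {
                // 404 Not Found: the kubelet has not created the Node yet.
            } else if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return true
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return false
    }
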
	I0414 14:52:12.596840 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:12.596864 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:12.596873 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:12.596878 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:12.599193 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:13.096896 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:13.096921 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:13.096930 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:13.096935 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:13.099008 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:13.597788 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:13.597813 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:13.597822 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:13.597826 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:13.600141 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:14.097364 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:14.097390 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:14.097398 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:14.097401 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:14.099682 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:14.099822 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:52:14.597362 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:14.597390 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:14.597401 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:14.597407 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:14.599923 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:15.096865 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:15.096890 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:15.096898 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:15.096903 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:15.099533 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:15.597246 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:15.597272 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:15.597280 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:15.597285 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:15.599591 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:16.096978 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:16.097005 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:16.097014 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:16.097019 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:16.099644 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:16.597351 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:16.597377 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:16.597385 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:16.597389 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:16.599794 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:16.599885 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:52:17.097583 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:17.097609 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:17.097621 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:17.097630 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:17.099987 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:17.597752 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:17.597777 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:17.597792 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:17.597798 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:17.599966 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:18.097796 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:18.097830 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:18.097843 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:18.097850 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:18.100104 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:18.597881 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:18.597906 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:18.597918 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:18.597923 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:18.600349 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:18.600437 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:52:19.097732 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:19.097758 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:19.097766 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:19.097772 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:19.100346 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:19.597034 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:19.597059 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:19.597074 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:19.597081 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:19.600054 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:20.097051 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:20.097075 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:20.097085 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:20.097091 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:20.099439 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:20.597189 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:20.597218 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:20.597230 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:20.597234 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:20.599635 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:21.097052 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:21.097078 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:21.097090 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:21.097095 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:21.099916 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:21.100012 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:52:21.597682 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:21.597708 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:21.597716 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:21.597722 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:21.600175 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:22.097764 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:22.097789 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:22.097798 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:22.097803 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:22.100278 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:22.596982 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:22.597008 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:22.597017 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:22.597021 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:22.599616 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:23.097388 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:23.097414 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:23.097423 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:23.097428 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:23.099818 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:23.597623 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:23.597655 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:23.597664 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:23.597669 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:23.600007 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:23.600102 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	[... ~110 further poll cycles elided: the identical GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	(same Accept/User-Agent request headers) repeats every ~500ms from 14:52:24 through 14:53:19, each answered
	"404 Not Found" in 1-8 milliseconds, with node_ready.go:53 logging `error getting node "ha-290859-m02":
	nodes "ha-290859-m02" not found` every few attempts ...]
	I0414 14:53:20.096886 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:20.096914 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:20.096926 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:20.096932 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:20.099209 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:20.596960 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:20.596986 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:20.596998 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:20.597004 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:20.599960 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:21.097055 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:21.097077 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:21.097088 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:21.097094 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:21.099402 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:21.597633 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:21.597662 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:21.597674 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:21.597680 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:21.599894 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:21.600006 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:22.097732 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:22.097762 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:22.097774 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:22.097782 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:22.100319 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:22.597118 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:22.597146 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:22.597157 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:22.597163 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:22.599684 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:23.097462 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:23.097495 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:23.097507 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:23.097513 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:23.100099 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:23.597914 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:23.597944 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:23.597953 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:23.597959 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:23.600364 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:23.600532 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:24.097607 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:24.097632 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:24.097640 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:24.097644 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:24.100185 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:24.596899 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:24.596940 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:24.596951 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:24.596957 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:24.599633 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:25.097761 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:25.097789 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:25.097803 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:25.097808 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:25.100205 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:25.596931 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:25.596958 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:25.596969 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:25.596974 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:25.599583 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:26.097899 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:26.097925 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:26.097934 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:26.097938 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:26.100330 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:26.100425 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:26.597539 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:26.597566 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:26.597575 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:26.597580 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:26.600215 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:27.096966 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:27.096998 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:27.097007 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:27.097012 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:27.099631 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:27.597574 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:27.597600 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:27.597607 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:27.597612 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:27.599913 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:28.097869 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:28.097894 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:28.097903 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:28.097906 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:28.100382 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:28.100477 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:28.597225 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:28.597254 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:28.597263 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:28.597269 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:28.599684 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:29.097190 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:29.097218 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:29.097229 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:29.097262 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:29.099744 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:29.597605 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:29.597634 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:29.597645 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:29.597652 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:29.600430 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:30.097442 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:30.097468 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:30.097476 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:30.097480 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:30.099457 1221070 round_trippers.go:581] Response Status: 404 Not Found in 1 milliseconds
	I0414 14:53:30.597276 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:30.597303 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:30.597312 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:30.597316 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:30.599873 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:30.599951 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:31.097106 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:31.097144 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:31.097153 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:31.097158 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:31.099513 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:31.597757 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:31.597783 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:31.597794 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:31.597798 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:31.600463 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:32.097182 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:32.097207 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:32.097215 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:32.097219 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:32.099765 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:32.597512 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:32.597537 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:32.597546 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:32.597551 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:32.599820 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:33.097643 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:33.097666 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:33.097674 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:33.097678 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:33.099796 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:33.099884 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:33.597718 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:33.597746 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:33.597755 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:33.597765 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:33.600269 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:34.097517 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:34.097544 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:34.097553 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:34.097558 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:34.100747 1221070 round_trippers.go:581] Response Status: 404 Not Found in 3 milliseconds
	I0414 14:53:34.597531 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:34.597558 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:34.597567 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:34.597570 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:34.599907 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:35.097832 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:35.097857 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:35.097869 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:35.097875 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:35.100197 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:35.100304 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:35.596881 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:35.596909 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:35.596918 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:35.596921 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:35.599227 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:36.097506 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:36.097528 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:36.097537 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:36.097541 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:36.099779 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:36.597044 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:36.597075 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:36.597086 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:36.597090 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:36.599704 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:37.097488 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:37.097512 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:37.097521 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:37.097527 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:37.099413 1221070 round_trippers.go:581] Response Status: 404 Not Found in 1 milliseconds
	I0414 14:53:37.596959 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:37.596985 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:37.596994 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:37.596998 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:37.599807 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:37.599901 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:38.097637 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:38.097663 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:38.097673 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:38.097678 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:38.100336 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:38.597075 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:38.597101 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:38.597110 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:38.597115 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:38.599545 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:39.097005 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:39.097031 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:39.097042 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:39.097047 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:39.099289 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:39.596971 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:39.596997 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:39.597006 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:39.597011 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:39.599228 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:40.097179 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:40.097207 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:40.097215 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:40.097221 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:40.099966 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:40.100061 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:40.597818 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:40.597844 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:40.597854 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:40.597859 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:40.600104 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:41.097551 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:41.097574 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:41.097586 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:41.097593 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:41.099851 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:41.596971 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:41.596996 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:41.597005 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:41.597008 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:41.599346 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:42.097228 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:42.097253 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:42.097262 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:42.097268 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:42.099597 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:42.597496 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:42.597522 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:42.597537 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:42.597542 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:42.599923 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:42.600028 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:43.097893 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:43.097928 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:43.097940 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:43.097946 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:43.100249 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:43.597079 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:43.597103 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:43.597111 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:43.597115 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:43.599554 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:44.097935 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:44.097963 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:44.097972 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:44.097978 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:44.100650 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:44.597578 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:44.597602 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:44.597611 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:44.597615 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:44.599830 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:45.097892 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:45.097932 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:45.097940 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:45.097960 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:45.100091 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:45.100177 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:45.596937 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:45.596965 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:45.596975 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:45.596982 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:45.599620 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:46.097332 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:46.097359 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:46.097367 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:46.097373 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:46.099777 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:46.597031 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:46.597059 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:46.597068 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:46.597075 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:46.599403 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:47.097731 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:47.097757 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:47.097766 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:47.097769 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:47.100280 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:47.100377 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:47.597123 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:47.597151 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:47.597170 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:47.597175 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:47.599534 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:48.097336 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:48.097361 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:48.097370 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:48.097374 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:48.099675 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:48.597501 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:48.597534 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:48.597547 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:48.597560 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:48.600236 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:49.097710 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:49.097738 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:49.097747 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:49.097750 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:49.100057 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:49.596902 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:49.596926 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:49.596935 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:49.596941 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:49.599460 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:49.599564 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:50.097595 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:50.097620 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:50.097629 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:50.097633 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:50.099825 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:50.597754 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:50.597780 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:50.597789 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:50.597793 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:50.600075 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:51.097870 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:51.097899 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:51.097909 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:51.097929 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:51.100654 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:51.596969 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:51.596997 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:51.597006 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:51.597010 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:51.599564 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:51.599659 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:52.097262 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:52.097289 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:52.097297 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:52.097302 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:52.099885 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:52.597623 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:52.597649 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:52.597657 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:52.597662 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:52.600287 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:53.097029 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:53.097056 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:53.097064 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:53.097070 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:53.100094 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:53.597857 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:53.597883 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:53.597892 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:53.597896 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:53.600381 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:53.600486 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:54.097694 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:54.097720 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:54.097733 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:54.097739 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:54.100246 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:54.596985 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:54.597015 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:54.597024 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:54.597029 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:54.599531 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:55.097645 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:55.097670 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:55.097678 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:55.097682 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:55.100175 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:55.596893 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:55.596928 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:55.596937 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:55.596942 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:55.599467 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:56.097332 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:56.097359 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:56.097367 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:56.097372 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:56.099838 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:56.099935 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:56.597119 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:56.597143 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:56.597152 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:56.597156 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:56.599329 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:57.097196 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:57.097223 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:57.097233 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:57.097238 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:57.099869 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:57.597766 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:57.597794 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:57.597806 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:57.597810 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:57.600130 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:58.096957 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:58.096983 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:58.096991 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:58.096999 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:58.099238 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:58.597087 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:58.597112 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:58.597126 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:58.597132 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:58.599330 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:58.599420 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:59.097878 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:59.097909 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:59.097921 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:59.097927 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:59.100274 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:59.597081 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:59.597111 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:59.597122 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:59.597127 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:59.599692 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:00.097673 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:00.097700 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:00.097709 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:00.097712 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:00.100091 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:00.597900 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:00.597929 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:00.597940 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:00.597946 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:00.600276 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:00.600373 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:01.097002 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:01.097028 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:01.097036 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:01.097042 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:01.099132 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:01.597696 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:01.597720 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:01.597729 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:01.597734 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:01.600078 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:02.096932 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:02.096958 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:02.096966 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:02.096971 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:02.099544 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:02.597385 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:02.597411 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:02.597419 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:02.597424 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:02.599758 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:03.097724 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:03.097751 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:03.097759 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:03.097763 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:03.099959 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:03.100080 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:03.596849 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:03.596874 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:03.596883 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:03.596887 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:03.599335 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:04.097559 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:04.097583 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:04.097591 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:04.097596 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:04.099995 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:04.597777 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:04.597812 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:04.597832 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:04.597838 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:04.600226 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:05.097053 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:05.097079 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:05.097088 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:05.097092 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:05.099413 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:05.597132 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:05.597157 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:05.597175 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:05.597181 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:05.599523 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:05.599615 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	[... ~110 further poll cycles elided: identical GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02 requests repeated every ~500ms from 14:54:06.097 through 14:55:01.096, each answered "404 Not Found" in 1-3 milliseconds; node_ready.go:53 logged `error getting node "ha-290859-m02": nodes "ha-290859-m02" not found` roughly every 2-3 seconds throughout ...]
	I0414 14:55:01.099184 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:01.596893 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:01.596921 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:01.596933 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:01.596939 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:01.599452 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:02.097155 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:02.097182 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:02.097191 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:02.097197 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:02.099208 1221070 round_trippers.go:581] Response Status: 404 Not Found in 1 milliseconds
	I0414 14:55:02.596931 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:02.596957 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:02.596968 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:02.596973 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:02.598907 1221070 round_trippers.go:581] Response Status: 404 Not Found in 1 milliseconds
	I0414 14:55:03.097709 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:03.097736 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:03.097744 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:03.097749 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:03.100088 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:03.100185 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:03.597905 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:03.597933 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:03.597944 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:03.597949 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:03.600246 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:04.097651 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:04.097679 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:04.097687 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:04.097693 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:04.100045 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:04.597839 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:04.597876 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:04.597885 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:04.597890 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:04.600163 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:05.097176 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:05.097200 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:05.097210 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:05.097214 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:05.099624 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:05.597323 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:05.597350 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:05.597360 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:05.597365 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:05.599598 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:05.599695 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:06.097552 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:06.097582 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:06.097591 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:06.097595 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:06.099900 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:06.597946 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:06.597974 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:06.597982 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:06.597988 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:06.600426 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:07.097279 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:07.097306 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:07.097315 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:07.097320 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:07.099371 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:07.597212 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:07.597236 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:07.597245 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:07.597250 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:07.599340 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:08.097240 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:08.097274 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:08.097289 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:08.097296 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:08.099717 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:08.099814 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:08.597662 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:08.597688 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:08.597697 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:08.597702 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:08.599709 1221070 round_trippers.go:581] Response Status: 404 Not Found in 1 milliseconds
	I0414 14:55:09.097250 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:09.097278 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:09.097289 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:09.097294 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:09.099634 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:09.597565 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:09.597589 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:09.597598 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:09.597603 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:09.599920 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:10.097101 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:10.097125 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:10.097136 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:10.097141 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:10.099632 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:10.597582 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:10.597608 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:10.597617 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:10.597623 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:10.599909 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:10.600015 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:11.097848 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:11.097875 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:11.097884 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:11.097889 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:11.100388 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:11.597033 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:11.597059 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:11.597068 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:11.597073 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:11.599446 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:12.097209 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:12.097237 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:12.097246 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:12.097251 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:12.099596 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:12.597381 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:12.597409 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:12.597419 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:12.597425 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:12.599739 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:13.097653 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:13.097679 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:13.097694 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:13.097698 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:13.100085 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:13.100162 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:13.596932 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:13.596960 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:13.596970 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:13.596976 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:13.599364 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:14.097757 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:14.097784 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:14.097793 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:14.097799 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:14.100496 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:14.597210 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:14.597235 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:14.597244 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:14.597248 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:14.599610 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:15.097782 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:15.097807 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:15.097819 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:15.097824 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:15.101005 1221070 round_trippers.go:581] Response Status: 404 Not Found in 3 milliseconds
	I0414 14:55:15.101098 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:15.597806 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:15.597832 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:15.597841 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:15.597844 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:15.600361 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:16.097098 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:16.097124 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:16.097133 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:16.097138 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:16.099616 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:16.597475 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:16.597501 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:16.597509 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:16.597514 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:16.599989 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:17.097804 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:17.097832 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:17.097842 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:17.097849 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:17.100125 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:17.597891 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:17.597921 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:17.597930 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:17.597934 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:17.600307 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:17.600400 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:18.097041 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:18.097068 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:18.097076 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:18.097082 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:18.099561 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:18.597301 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:18.597328 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:18.597337 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:18.597341 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:18.599635 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:19.097188 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:19.097214 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:19.097223 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:19.097228 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:19.099493 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:19.597192 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:19.597215 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:19.597224 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:19.597229 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:19.599599 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:20.097639 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:20.097663 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:20.097671 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:20.097675 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:20.099803 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:20.099912 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:20.597725 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:20.597750 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:20.597759 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:20.597764 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:20.600274 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:21.097135 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:21.097164 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:21.097173 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:21.097178 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:21.099615 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:21.597251 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:21.597300 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:21.597309 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:21.597313 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:21.599653 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:22.097498 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:22.097523 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:22.097536 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:22.097542 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:22.099623 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:22.597528 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:22.597557 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:22.597565 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:22.597570 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:22.599837 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:22.599933 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:23.097809 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:23.097835 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:23.097846 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:23.097851 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:23.099889 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:23.597818 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:23.597845 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:23.597858 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:23.597865 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:23.599919 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:24.097248 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:24.097280 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:24.097293 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:24.097299 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:24.099650 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:24.597564 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:24.597589 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:24.597598 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:24.597603 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:24.600076 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:24.600182 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:25.097211 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:25.097237 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:25.097246 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:25.097250 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:25.099737 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:25.597673 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:25.597700 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:25.597711 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:25.597718 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:25.600363 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:26.097116 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:26.097145 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:26.097154 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:26.097158 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:26.099408 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:26.597105 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:26.597133 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:26.597142 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:26.597147 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:26.599718 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:27.097532 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:27.097559 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:27.097569 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:27.097573 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:27.100132 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:27.100234 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:27.596843 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:27.596866 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:27.596875 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:27.596880 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:27.598858 1221070 round_trippers.go:581] Response Status: 404 Not Found in 1 milliseconds
	I0414 14:55:28.097716 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:28.097744 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:28.097752 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:28.097759 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:28.100226 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:28.596972 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:28.596999 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:28.597008 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:28.597013 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:28.599202 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:29.097781 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:29.097804 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:29.097814 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:29.097819 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:29.100259 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:29.100355 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:29.596974 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:29.597007 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:29.597018 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:29.597023 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:29.599234 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:30.097347 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:30.097369 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:30.097379 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:30.097384 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:30.099858 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:30.597703 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:30.597732 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:30.597742 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:30.597747 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:30.600213 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:31.096866 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:31.096894 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:31.096910 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:31.096925 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:31.098999 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:31.596844 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:31.596869 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:31.596877 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:31.596881 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:31.599416 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:31.599520 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:32.097294 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:32.097320 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:32.097329 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:32.097334 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:32.099664 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:32.597534 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:32.597562 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:32.597573 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:32.597581 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:32.599997 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:33.097885 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:33.097913 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:33.097925 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:33.097933 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:33.100424 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:33.597212 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:33.597245 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:33.597256 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:33.597261 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:33.599737 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:33.599825 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:34.096946 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:34.096977 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:34.096990 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:34.096997 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:34.099325 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:34.597051 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:34.597077 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:34.597088 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:34.597094 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:34.599638 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:35.097797 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:35.097822 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:35.097832 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:35.097839 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:35.100270 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:35.597109 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:35.597137 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:35.597145 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:35.597150 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:35.599542 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:36.097465 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:36.097491 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:36.097500 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:36.097505 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:36.100187 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:36.100290 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:36.596906 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:36.596932 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:36.596944 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:36.596950 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:36.599839 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:37.097766 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:37.097792 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:37.097801 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:37.097807 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:37.099951 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:37.597950 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:37.597979 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:37.597989 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:37.597993 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:37.600410 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:38.097271 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:38.097298 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:38.097306 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:38.097311 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:38.099663 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:38.597601 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:38.597627 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:38.597636 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:38.597647 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:38.600447 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:38.600553 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:39.097748 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:39.097775 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:39.097786 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:39.097794 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:39.100150 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:39.596990 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:39.597019 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:39.597028 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:39.597032 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:39.599406 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:40.097366 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:40.097396 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:40.097409 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:40.097416 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:40.099965 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:40.597743 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:40.597771 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:40.597780 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:40.597785 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:40.600273 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:41.096973 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:41.096997 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:41.097006 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:41.097013 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:41.099218 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:41.099337 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:41.596871 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:41.596897 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:41.596908 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:41.596913 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:41.599017 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:42.097855 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:42.097889 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:42.097899 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:42.097905 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:42.101284 1221070 round_trippers.go:581] Response Status: 404 Not Found in 3 milliseconds
	I0414 14:55:42.596957 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:42.596996 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:42.597008 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:42.597016 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:42.599231 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:43.097007 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:43.097034 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:43.097046 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:43.097051 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:43.099362 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:43.099452 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:43.597120 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:43.597147 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:43.597157 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:43.597164 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:43.599396 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:44.097698 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:44.097725 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:44.097734 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:44.097738 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:44.099914 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:44.597690 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:44.597715 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:44.597724 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:44.597729 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:44.600159 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:45.097089 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:45.097112 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:45.097121 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:45.097125 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:45.099361 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:45.596975 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:45.597002 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:45.597010 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:45.597014 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:45.599569 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:45.599649 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:46.097457 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:46.097483 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:46.097492 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:46.097497 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:46.099821 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:46.597701 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:46.597727 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:46.597735 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:46.597739 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:46.600275 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:47.097117 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:47.097141 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:47.097150 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:47.097154 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:47.099568 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:47.597488 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:47.597514 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:47.597522 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:47.597527 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:47.599944 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:47.600100 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:48.096867 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:48.096892 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:48.096908 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:48.096911 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:48.099730 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:48.597476 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:48.597506 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:48.597514 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:48.597520 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:48.599790 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:49.097193 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:49.097219 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:49.097228 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:49.097231 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:49.099213 1221070 round_trippers.go:581] Response Status: 404 Not Found in 1 milliseconds
	I0414 14:55:49.596898 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:49.596923 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:49.596931 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:49.596935 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:49.599211 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:50.097588 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:50.097612 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:50.097622 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:50.097626 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:50.099587 1221070 round_trippers.go:581] Response Status: 404 Not Found in 1 milliseconds
	I0414 14:55:50.099671 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:50.597293 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:50.597326 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:50.597335 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:50.597346 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:50.599755 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:51.097570 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:51.097599 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:51.097608 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:51.097613 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:51.100622 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:51.597436 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:51.597463 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:51.597472 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:51.597477 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:51.599799 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:52.097594 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:52.097621 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:52.097631 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:52.097635 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:52.100149 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:52.100239 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:52.596871 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:52.596917 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:52.596927 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:52.596932 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:52.598861 1221070 round_trippers.go:581] Response Status: 404 Not Found in 1 milliseconds
	I0414 14:55:53.097658 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:53.097687 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:53.097695 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:53.097701 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:53.100104 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:53.597899 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:53.597931 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:53.597939 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:53.597944 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:53.600381 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:54.097688 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:54.097715 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:54.097724 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:54.097728 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:54.100282 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:54.100365 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:54.597098 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:54.597127 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:54.597135 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:54.597139 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:54.599447 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:55.097620 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:55.097648 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:55.097658 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:55.097663 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:55.100052 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:55.596920 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:55.596949 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:55.596957 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:55.596964 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:55.599399 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:56.097258 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:56.097285 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:56.097294 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:56.097300 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:56.099626 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:56.597512 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:56.597537 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:56.597546 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:56.597550 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:56.599780 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:56.599862 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:57.097715 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:57.097744 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:57.097753 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:57.097758 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:57.100249 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:57.597037 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:57.597065 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:57.597073 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:57.597079 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:57.599410 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:58.097243 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:58.097271 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:58.097281 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:58.097286 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:58.099805 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:58.597743 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:58.597775 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:58.597785 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:58.597791 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:58.599981 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:58.600099 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:59.097525 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:59.097554 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:59.097563 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:59.097567 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:59.100128 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:59.596950 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:59.596975 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:59.596983 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:59.596987 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:59.599509 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:00.097582 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:00.097606 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:00.097615 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:00.097620 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:00.099878 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:00.597634 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:00.597660 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:00.597669 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:00.597673 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:00.599960 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:01.097755 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:01.097779 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:01.097788 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:01.097793 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:01.100104 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:01.100191 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:56:01.597749 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:01.597778 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:01.597789 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:01.597799 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:01.600379 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:02.097127 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:02.097163 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:02.097172 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:02.097179 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:02.099347 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:02.597084 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:02.597114 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:02.597122 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:02.597126 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:02.599484 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:03.097203 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:03.097229 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:03.097244 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:03.097249 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:03.099750 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:03.597532 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:03.597557 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:03.597565 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:03.597570 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:03.599887 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:03.599994 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:56:04.097156 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:04.097182 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:04.097193 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:04.097202 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:04.099543 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:04.597391 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:04.597422 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:04.597434 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:04.597441 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:04.599613 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:05.097696 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:05.097719 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:05.097727 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:05.097733 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:05.101649 1221070 round_trippers.go:581] Response Status: 404 Not Found in 3 milliseconds
	I0414 14:56:05.597340 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:05.597364 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:05.597373 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:05.597379 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:05.599888 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:05.600026 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:56:06.097634 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:06.097659 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:06.097668 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:06.097672 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:06.099863 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:06.597652 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:06.597686 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:06.597701 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:06.597707 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:06.599965 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:07.097782 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:07.097812 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:07.097825 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:07.097833 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:07.100367 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:07.597100 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:07.597132 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:07.597144 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:07.597151 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:07.599359 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:08.097183 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:08.097225 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:08.097240 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:08.097248 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:08.099618 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:08.099711 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:56:08.597331 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:08.597358 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:08.597370 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:08.597377 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:08.599820 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:09.097223 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:09.097254 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:09.097264 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:09.097268 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:09.099655 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:09.597538 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:09.597562 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:09.597570 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:09.597576 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:09.599815 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:10.097831 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:10.097853 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:10.097861 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:10.097865 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:10.100242 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:10.100337 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:56:10.597109 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:10.597137 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:10.597146 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:10.597152 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:10.600167 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:11.097037 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:11.097061 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:11.097070 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:11.097076 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:11.099474 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:11.597114 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:11.597141 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:11.597150 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:11.597155 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:11.599707 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:12.097023 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:12.097048 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:12.097056 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:12.097061 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:12.099277 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:12.099371 1221070 node_ready.go:38] duration metric: took 4m0.002706246s for node "ha-290859-m02" to be "Ready" ...
	I0414 14:56:12.101227 1221070 out.go:201] 
	W0414 14:56:12.102352 1221070 out.go:270] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0414 14:56:12.102371 1221070 out.go:270] * 
	W0414 14:56:12.103364 1221070 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 14:56:12.104737 1221070 out.go:201] 
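	
	==> illustrative sketch (not from the test run): node readiness polling <==
	The repeated GET /api/v1/nodes/ha-290859-m02 requests above are minikube
	polling, at roughly 500ms intervals, for a node object that was never
	registered: every poll returns 404 Not Found until node_ready.go gives up
	after 4m and the 6m0s wait budget named in the GUEST_START error is
	exceeded. The Go sketch below approximates that kind of wait loop with
	client-go; it is a minimal illustration under assumptions (waitNodeReady
	is a hypothetical helper name and the kubeconfig path is a guess), not
	minikube's actual node_ready.go implementation.
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// waitNodeReady (hypothetical helper) polls the API server until the named
	// node reports Ready, treating 404 Not Found (node not yet registered) as
	// "keep waiting" rather than a hard error.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if apierrors.IsNotFound(err) {
					return false, nil // mirrors the repeated 404 polls in the log
				}
				if err != nil {
					return false, err
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitNodeReady(context.Background(), cs, "ha-290859-m02", 6*time.Minute); err != nil {
			fmt.Println("node never became ready:", err) // the shape of the failure above
		}
	}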
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ea9e85492cab1       6e38f40d628db       3 minutes ago       Running             storage-provisioner       2                   22012253a39e5       storage-provisioner
	6def8b5e81c3c       8c811b4aec35f       4 minutes ago       Running             busybox                   1                   8810167e1850b       busybox-58667487b6-t6bgg
	d9bf8cef6e955       c69fa2e9cbf5f       4 minutes ago       Running             coredns                   1                   ae09d1f35f5bb       coredns-668d6bf9bc-wbn4p
	c3c2f4d5fe419       c69fa2e9cbf5f       4 minutes ago       Running             coredns                   1                   8b812c2dfd4e4       coredns-668d6bf9bc-qnl6q
	607041fc2f4ed       df3849d954c98       4 minutes ago       Running             kindnet-cni               1                   4c291c3e02236       kindnet-hm99t
	acc7b3f819a6b       6e38f40d628db       4 minutes ago       Exited              storage-provisioner       1                   22012253a39e5       storage-provisioner
	1c01d86a74294       f1332858868e1       4 minutes ago       Running             kube-proxy                1                   756822c1e13ce       kube-proxy-cg945
	e8658abcccb8b       b6a454c5a800d       4 minutes ago       Running             kube-controller-manager   1                   b171c03689d46       kube-controller-manager-ha-290859
	29445064369e5       d8e673e7c9983       4 minutes ago       Running             kube-scheduler            1                   6e1304537402c       kube-scheduler-ha-290859
	6bb8bbfa1b317       a9e7e6b294baf       4 minutes ago       Running             etcd                      1                   d32dfc76a4340       etcd-ha-290859
	00b109770be1c       85b7a174738ba       4 minutes ago       Running             kube-apiserver            1                   eb5666eae29e1       kube-apiserver-ha-290859
	6dc42b262abf6       6ff023a402a69       4 minutes ago       Running             kube-vip                  0                   c4bd0bf012eaf       kube-vip-ha-290859
	24e6d7cfe7ea4       8c811b4aec35f       26 minutes ago      Exited              busybox                   0                   78438e8022143       busybox-58667487b6-t6bgg
	731a9f2fe8645       c69fa2e9cbf5f       26 minutes ago      Exited              coredns                   0                   e56d2e4c87eea       coredns-668d6bf9bc-qnl6q
	0ec0a3a234c7c       c69fa2e9cbf5f       26 minutes ago      Exited              coredns                   0                   2818c413e6e32       coredns-668d6bf9bc-wbn4p
	2df8ccb8d6ed9       df3849d954c98       26 minutes ago      Exited              kindnet-cni               0                   08244cfc780bd       kindnet-hm99t
	e22a81661302f       f1332858868e1       26 minutes ago      Exited              kube-proxy                0                   f20a0bcfbd507       kube-proxy-cg945
	8263b35014337       b6a454c5a800d       26 minutes ago      Exited              kube-controller-manager   0                   96ffccfabb2f0       kube-controller-manager-ha-290859
	3607093f95b04       85b7a174738ba       26 minutes ago      Exited              kube-apiserver            0                   7d06c53c8318a       kube-apiserver-ha-290859
	b9d0c94204534       a9e7e6b294baf       26 minutes ago      Exited              etcd                      0                   07c98c2ded11c       etcd-ha-290859
	341626ffff967       d8e673e7c9983       26 minutes ago      Exited              kube-scheduler            0                   d86edf81d4f34       kube-scheduler-ha-290859
	
	
	==> containerd <==
	Apr 14 14:52:05 ha-290859 containerd[832]: time="2025-04-14T14:52:05.640171349Z" level=info msg="StartContainer for \"6def8b5e81c3c293839e823e7db25b60e0f88e530e87f93ad6439e1ef8967337\" returns successfully"
	Apr 14 14:52:06 ha-290859 containerd[832]: time="2025-04-14T14:52:06.457242635Z" level=info msg="RemoveContainer for \"922f97d06563e10c12ce83edd45e4f1aa0b78449dcdb50b413a7f4fc80cc346b\""
	Apr 14 14:52:06 ha-290859 containerd[832]: time="2025-04-14T14:52:06.469888693Z" level=info msg="RemoveContainer for \"922f97d06563e10c12ce83edd45e4f1aa0b78449dcdb50b413a7f4fc80cc346b\" returns successfully"
	Apr 14 14:52:17 ha-290859 containerd[832]: time="2025-04-14T14:52:17.268681775Z" level=info msg="CreateContainer within sandbox \"22012253a39e523fbee6ecb847d27dbb8e09ad98b80aa344f91a171c063bedc5\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:2,}"
	Apr 14 14:52:17 ha-290859 containerd[832]: time="2025-04-14T14:52:17.288966764Z" level=info msg="CreateContainer within sandbox \"22012253a39e523fbee6ecb847d27dbb8e09ad98b80aa344f91a171c063bedc5\" for &ContainerMetadata{Name:storage-provisioner,Attempt:2,} returns container id \"ea9e85492cab11d04c4610b349d14e65f48b4f7ef9b1bf510cce3f98d9f23a26\""
	Apr 14 14:52:17 ha-290859 containerd[832]: time="2025-04-14T14:52:17.289554135Z" level=info msg="StartContainer for \"ea9e85492cab11d04c4610b349d14e65f48b4f7ef9b1bf510cce3f98d9f23a26\""
	Apr 14 14:52:17 ha-290859 containerd[832]: time="2025-04-14T14:52:17.339537509Z" level=info msg="StartContainer for \"ea9e85492cab11d04c4610b349d14e65f48b4f7ef9b1bf510cce3f98d9f23a26\" returns successfully"
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.225918045Z" level=info msg="RemoveContainer for \"9914f8879fc4321c682c89c4d9b8a4cf65aa1773b5281eca94e0f93095a24f4d\""
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.231418188Z" level=info msg="RemoveContainer for \"9914f8879fc4321c682c89c4d9b8a4cf65aa1773b5281eca94e0f93095a24f4d\" returns successfully"
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.233079029Z" level=info msg="StopPodSandbox for \"7b4e857fc4a7278a2912c7bad6709c158c79bd073828baa274e7c8874610feb5\""
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.233179127Z" level=info msg="TearDown network for sandbox \"7b4e857fc4a7278a2912c7bad6709c158c79bd073828baa274e7c8874610feb5\" successfully"
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.233192370Z" level=info msg="StopPodSandbox for \"7b4e857fc4a7278a2912c7bad6709c158c79bd073828baa274e7c8874610feb5\" returns successfully"
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.233840780Z" level=info msg="RemovePodSandbox for \"7b4e857fc4a7278a2912c7bad6709c158c79bd073828baa274e7c8874610feb5\""
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.233892071Z" level=info msg="Forcibly stopping sandbox \"7b4e857fc4a7278a2912c7bad6709c158c79bd073828baa274e7c8874610feb5\""
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.233958310Z" level=info msg="TearDown network for sandbox \"7b4e857fc4a7278a2912c7bad6709c158c79bd073828baa274e7c8874610feb5\" successfully"
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.239481391Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7b4e857fc4a7278a2912c7bad6709c158c79bd073828baa274e7c8874610feb5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.239617741Z" level=info msg="RemovePodSandbox \"7b4e857fc4a7278a2912c7bad6709c158c79bd073828baa274e7c8874610feb5\" returns successfully"
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.240179712Z" level=info msg="StopPodSandbox for \"4de376d34ee7f88a6fa395d518e7950ac2b1691d3e1668d0d79130d65133045f\""
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.240271309Z" level=info msg="TearDown network for sandbox \"4de376d34ee7f88a6fa395d518e7950ac2b1691d3e1668d0d79130d65133045f\" successfully"
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.240298864Z" level=info msg="StopPodSandbox for \"4de376d34ee7f88a6fa395d518e7950ac2b1691d3e1668d0d79130d65133045f\" returns successfully"
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.240783074Z" level=info msg="RemovePodSandbox for \"4de376d34ee7f88a6fa395d518e7950ac2b1691d3e1668d0d79130d65133045f\""
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.240816354Z" level=info msg="Forcibly stopping sandbox \"4de376d34ee7f88a6fa395d518e7950ac2b1691d3e1668d0d79130d65133045f\""
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.240870755Z" level=info msg="TearDown network for sandbox \"4de376d34ee7f88a6fa395d518e7950ac2b1691d3e1668d0d79130d65133045f\" successfully"
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.245855866Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4de376d34ee7f88a6fa395d518e7950ac2b1691d3e1668d0d79130d65133045f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.245939634Z" level=info msg="RemovePodSandbox \"4de376d34ee7f88a6fa395d518e7950ac2b1691d3e1668d0d79130d65133045f\" returns successfully"
	
	
	==> coredns [0ec0a3a234c7c9ab89ca83a237362a229e9c5f0e94fdbf641b886cf994e1cd2f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:46089 - 56153 "HINFO IN 6072608555509463616.6529762715821029691. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009374887s
	[INFO] 10.244.0.4:35907 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000221161s
	[INFO] 10.244.0.4:36782 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005796917s
	[INFO] 10.244.0.4:41522 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000189547s
	[INFO] 10.244.0.4:42146 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118814s
	[INFO] 10.244.0.4:60607 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123758s
	[INFO] 10.244.0.4:43711 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000363945s
	[INFO] 10.244.0.4:55165 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147511s
	[INFO] 10.244.0.4:37988 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063814s
	[INFO] 10.244.0.4:34715 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110518s
	
	
	==> coredns [731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:50026 - 40228 "HINFO IN 6089878548460793106.7503956428927620962. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010088983s
	[INFO] 10.244.0.4:56129 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00054069s
	[INFO] 10.244.0.4:53926 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.015577927s
	[INFO] 10.244.0.4:39454 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 1.017801671s
	[INFO] 10.244.0.4:52928 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.006480432s
	[INFO] 10.244.0.4:37155 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000144828s
	[INFO] 10.244.0.4:60063 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003567762s
	[INFO] 10.244.0.4:60207 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153406s
	[INFO] 10.244.0.4:60174 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117303s
	[INFO] 10.244.0.4:60031 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124845s
	[INFO] 10.244.0.4:43114 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000177401s
	[INFO] 10.244.0.4:59108 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000291115s
	
	
	==> coredns [c3c2f4d5fe419392ff3850394da92847c7bcfe369f4d0eddffd38c2a59b41025] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:48956 - 43158 "HINFO IN 5542730592661564248.5649616312753148618. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009354162s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1967277509]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Apr-2025 14:52:05.690) (total time: 30002ms):
	Trace[1967277509]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (14:52:35.692)
	Trace[1967277509]: [30.002592464s] [30.002592464s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1343823812]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Apr-2025 14:52:05.691) (total time: 30002ms):
	Trace[1343823812]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (14:52:35.693)
	Trace[1343823812]: [30.00250289s] [30.00250289s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2019019817]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Apr-2025 14:52:05.690) (total time: 30004ms):
	Trace[2019019817]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (14:52:35.694)
	Trace[2019019817]: [30.004408468s] [30.004408468s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [d9bf8cef6e9551ba044bfa75d53bebdabf94a544fb35bcba8ae9dda955c97297] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:52958 - 12430 "HINFO IN 2501253073000439982.8063739159986489070. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.007070061s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1427080852]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Apr-2025 14:52:05.691) (total time: 30002ms):
	Trace[1427080852]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (14:52:35.691)
	Trace[1427080852]: [30.002092041s] [30.002092041s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1959333545]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Apr-2025 14:52:05.691) (total time: 30002ms):
	Trace[1959333545]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (14:52:35.692)
	Trace[1959333545]: [30.002031471s] [30.002031471s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[910229496]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Apr-2025 14:52:05.690) (total time: 30001ms):
	Trace[910229496]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (14:52:35.691)
	Trace[910229496]: [30.001488485s] [30.001488485s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
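	
	==> illustrative sketch (not from the test run): the reflector i/o timeouts <==
	Both restarted coredns instances fail the same way: the kubernetes
	plugin's client-go reflectors try to list Services, EndpointSlices and
	Namespaces through the in-cluster VIP 10.96.0.1:443, which is not
	reachable yet right after the host reboot, so each list blocks for ~30s
	and dies with "dial tcp 10.96.0.1:443: i/o timeout". The sketch below
	reproduces the shape of that failure with a bare List call; the
	rest.Config values (host, 30s timeout, insecure TLS) are assumptions for
	illustration, and this is not coredns's actual plugin code.
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)
	
	func main() {
		cfg := &rest.Config{
			Host:            "https://10.96.0.1:443",              // service VIP from the traces
			Timeout:         30 * time.Second,                     // matches the ~30s the reflectors waited
			TLSClientConfig: rest.TLSClientConfig{Insecure: true}, // illustration only
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
	
		// With nothing answering on the VIP, the connection attempt times out
		// before TLS even starts, yielding the same error the reflectors logged:
		// Get "https://10.96.0.1:443/api/v1/services?limit=500...": dial tcp
		// 10.96.0.1:443: i/o timeout
		_, err := cs.CoreV1().Services(metav1.NamespaceAll).List(context.Background(), metav1.ListOptions{Limit: 500})
		if err != nil {
			fmt.Println("list services failed:", err)
		}
	}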
	
	
	==> describe nodes <==
	Name:               ha-290859
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-290859
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ed8f1f01b35eff2786f40199152a1775806f2de2
	                    minikube.k8s.io/name=ha-290859
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_14T14_29_26_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Apr 2025 14:29:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-290859
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Apr 2025 14:56:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Apr 2025 14:52:02 +0000   Mon, 14 Apr 2025 14:29:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Apr 2025 14:52:02 +0000   Mon, 14 Apr 2025 14:29:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Apr 2025 14:52:02 +0000   Mon, 14 Apr 2025 14:29:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Apr 2025 14:52:02 +0000   Mon, 14 Apr 2025 14:29:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.110
	  Hostname:    ha-290859
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0538f5775f954b3bbf6bc94e8eb6c49a
	  System UUID:                0538f577-5f95-4b3b-bf6b-c94e8eb6c49a
	  Boot ID:                    506c18f2-7f12-4001-8285-917ecaddf63d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-t6bgg             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 coredns-668d6bf9bc-qnl6q             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 coredns-668d6bf9bc-wbn4p             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 etcd-ha-290859                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         26m
	  kube-system                 kindnet-hm99t                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26m
	  kube-system                 kube-apiserver-ha-290859             250m (12%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-controller-manager-ha-290859    200m (10%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-proxy-cg945                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-ha-290859             100m (5%)     0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-vip-ha-290859                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m7s                   kube-proxy       
	  Normal   Starting                 26m                    kube-proxy       
	  Normal   Starting                 26m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  26m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  26m                    kubelet          Node ha-290859 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    26m                    kubelet          Node ha-290859 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     26m                    kubelet          Node ha-290859 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           26m                    node-controller  Node ha-290859 event: Registered Node ha-290859 in Controller
	  Normal   NodeReady                26m                    kubelet          Node ha-290859 status is now: NodeReady
	  Normal   Starting                 4m25s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  4m25s (x8 over 4m25s)  kubelet          Node ha-290859 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m25s (x8 over 4m25s)  kubelet          Node ha-290859 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m25s (x7 over 4m25s)  kubelet          Node ha-290859 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  4m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           4m12s                  node-controller  Node ha-290859 event: Registered Node ha-290859 in Controller
	  Warning  Rebooted                 4m11s                  kubelet          Node ha-290859 has been rebooted, boot id: 506c18f2-7f12-4001-8285-917ecaddf63d
	
	
	Name:               ha-290859-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-290859-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ed8f1f01b35eff2786f40199152a1775806f2de2
	                    minikube.k8s.io/name=ha-290859
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_04_14T14_42_30_0700
	                    minikube.k8s.io/version=v1.35.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Apr 2025 14:42:29 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-290859-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Apr 2025 14:48:17 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 14 Apr 2025 14:46:33 +0000   Mon, 14 Apr 2025 14:49:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 14 Apr 2025 14:46:33 +0000   Mon, 14 Apr 2025 14:49:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 14 Apr 2025 14:46:33 +0000   Mon, 14 Apr 2025 14:49:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 14 Apr 2025 14:46:33 +0000   Mon, 14 Apr 2025 14:49:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.112
	  Hostname:    ha-290859-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 96e9da9bd9e1490583702338b88b0c23
	  System UUID:                96e9da9b-d9e1-4905-8370-2338b88b0c23
	  Boot ID:                    b2600615-03c7-4984-8138-73f9baedc04e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-8bg2x    0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kindnet-4jz25               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-sp56w            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientMemory  13m (x2 over 13m)  kubelet          Node ha-290859-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x2 over 13m)  kubelet          Node ha-290859-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x2 over 13m)  kubelet          Node ha-290859-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node ha-290859-m03 event: Registered Node ha-290859-m03 in Controller
	  Normal  NodeReady                13m                kubelet          Node ha-290859-m03 status is now: NodeReady
	  Normal  NodeNotReady             7m4s               node-controller  Node ha-290859-m03 status is now: NodeNotReady
	  Normal  RegisteredNode           4m12s              node-controller  Node ha-290859-m03 event: Registered Node ha-290859-m03 in Controller
	
	
	==> dmesg <==
	[Apr14 14:51] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051074] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036733] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.829588] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.946390] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.551280] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.183144] systemd-fstab-generator[755]: Ignoring "noauto" option for root device
	[  +0.054346] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061747] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +0.177698] systemd-fstab-generator[782]: Ignoring "noauto" option for root device
	[  +0.145567] systemd-fstab-generator[794]: Ignoring "noauto" option for root device
	[  +0.269397] systemd-fstab-generator[824]: Ignoring "noauto" option for root device
	[  +1.160092] systemd-fstab-generator[899]: Ignoring "noauto" option for root device
	[  +6.952352] kauditd_printk_skb: 197 callbacks suppressed
	[Apr14 14:52] kauditd_printk_skb: 40 callbacks suppressed
	[ +12.604617] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [6bb8bbfa1b317897b9bcc96ba49e7c68f83cc4409dd69a72b86f0448aa2519ea] <==
	{"level":"info","ts":"2025-04-14T14:51:55.652582Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a3dbfa6decfc8853","local-member-id":"fbb007bab925a598","added-peer-id":"fbb007bab925a598","added-peer-peer-urls":["https://192.168.39.110:2380"]}
	{"level":"info","ts":"2025-04-14T14:51:55.652820Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a3dbfa6decfc8853","local-member-id":"fbb007bab925a598","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:51:55.652875Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:51:55.657644Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T14:51:55.677815Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-04-14T14:51:55.678882Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"fbb007bab925a598","initial-advertise-peer-urls":["https://192.168.39.110:2380"],"listen-peer-urls":["https://192.168.39.110:2380"],"advertise-client-urls":["https://192.168.39.110:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.110:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-14T14:51:55.678927Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-14T14:51:55.679144Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.110:2380"}
	{"level":"info","ts":"2025-04-14T14:51:55.679165Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.110:2380"}
	{"level":"info","ts":"2025-04-14T14:51:56.795570Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 is starting a new election at term 2"}
	{"level":"info","ts":"2025-04-14T14:51:56.795637Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-04-14T14:51:56.795654Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 received MsgPreVoteResp from fbb007bab925a598 at term 2"}
	{"level":"info","ts":"2025-04-14T14:51:56.795666Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 became candidate at term 3"}
	{"level":"info","ts":"2025-04-14T14:51:56.795959Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 received MsgVoteResp from fbb007bab925a598 at term 3"}
	{"level":"info","ts":"2025-04-14T14:51:56.796217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 became leader at term 3"}
	{"level":"info","ts":"2025-04-14T14:51:56.796240Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fbb007bab925a598 elected leader fbb007bab925a598 at term 3"}
	{"level":"info","ts":"2025-04-14T14:51:56.797919Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"fbb007bab925a598","local-member-attributes":"{Name:ha-290859 ClientURLs:[https://192.168.39.110:2379]}","request-path":"/0/members/fbb007bab925a598/attributes","cluster-id":"a3dbfa6decfc8853","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-14T14:51:56.798371Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T14:51:56.798558Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T14:51:56.799556Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-14T14:51:56.799592Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-14T14:51:56.800393Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T14:51:56.801226Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.110:2379"}
	{"level":"info","ts":"2025-04-14T14:51:56.800393Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T14:51:56.802399Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [b9d0c942045346e617420beacf1ee53ebaa73b72295bfad233845fe524f8b15c] <==
	{"level":"info","ts":"2025-04-14T14:29:20.942134Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-14T14:29:20.942264Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.110:2379"}
	{"level":"info","ts":"2025-04-14T14:29:20.943625Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-14T14:29:20.943655Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2025-04-14T14:29:27.104552Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.197172ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/node-controller\" limit:1 ","response":"range_response_count:1 size:195"}
	{"level":"info","ts":"2025-04-14T14:29:27.104712Z","caller":"traceutil/trace.go:171","msg":"trace[2014118741] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/node-controller; range_end:; response_count:1; response_revision:283; }","duration":"161.489617ms","start":"2025-04-14T14:29:26.943197Z","end":"2025-04-14T14:29:27.104687Z","steps":["trace[2014118741] 'range keys from in-memory index tree'  (duration: 161.141805ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:29:27.105569Z","caller":"traceutil/trace.go:171","msg":"trace[1003808847] transaction","detail":"{read_only:false; response_revision:284; number_of_response:1; }","duration":"157.128151ms","start":"2025-04-14T14:29:26.948431Z","end":"2025-04-14T14:29:27.105559Z","steps":["trace[1003808847] 'process raft request'  (duration: 84.378612ms)","trace[1003808847] 'compare'  (duration: 71.52798ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-14T14:29:27.104865Z","caller":"traceutil/trace.go:171","msg":"trace[43329066] linearizableReadLoop","detail":"{readStateIndex:297; appliedIndex:296; }","duration":"119.436827ms","start":"2025-04-14T14:29:26.985404Z","end":"2025-04-14T14:29:27.104841Z","steps":["trace[43329066] 'read index received'  (duration: 47.335931ms)","trace[43329066] 'applied index is now lower than readState.Index'  (duration: 72.100547ms)"],"step_count":2}
	{"level":"warn","ts":"2025-04-14T14:29:27.105882Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.482108ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-290859\" limit:1 ","response":"range_response_count:1 size:4024"}
	{"level":"info","ts":"2025-04-14T14:29:27.105907Z","caller":"traceutil/trace.go:171","msg":"trace[1848025885] range","detail":"{range_begin:/registry/minions/ha-290859; range_end:; response_count:1; response_revision:284; }","duration":"120.538719ms","start":"2025-04-14T14:29:26.985360Z","end":"2025-04-14T14:29:27.105899Z","steps":["trace[1848025885] 'agreement among raft nodes before linearized reading'  (duration: 120.384333ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:30:04.979205Z","caller":"traceutil/trace.go:171","msg":"trace[85484590] transaction","detail":"{read_only:false; response_revision:496; number_of_response:1; }","duration":"156.247744ms","start":"2025-04-14T14:30:04.822935Z","end":"2025-04-14T14:30:04.979183Z","steps":["trace[85484590] 'process raft request'  (duration: 156.102613ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:39:20.967676Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":955}
	{"level":"info","ts":"2025-04-14T14:39:20.980951Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":955,"took":"12.971168ms","hash":3281203929,"current-db-size-bytes":2400256,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2400256,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2025-04-14T14:39:20.980998Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3281203929,"revision":955,"compact-revision":-1}
	{"level":"info","ts":"2025-04-14T14:42:12.425594Z","caller":"traceutil/trace.go:171","msg":"trace[593749251] linearizableReadLoop","detail":"{readStateIndex:1974; appliedIndex:1973; }","duration":"103.549581ms","start":"2025-04-14T14:42:12.322004Z","end":"2025-04-14T14:42:12.425554Z","steps":["trace[593749251] 'read index received'  (duration: 102.720139ms)","trace[593749251] 'applied index is now lower than readState.Index'  (duration: 828.805µs)"],"step_count":2}
	{"level":"warn","ts":"2025-04-14T14:42:12.426144Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.759593ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2025-04-14T14:42:12.426196Z","caller":"traceutil/trace.go:171","msg":"trace[257637869] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; response_count:0; response_revision:1805; }","duration":"104.23976ms","start":"2025-04-14T14:42:12.321948Z","end":"2025-04-14T14:42:12.426188Z","steps":["trace[257637869] 'agreement among raft nodes before linearized reading'  (duration: 103.769974ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:42:12.425685Z","caller":"traceutil/trace.go:171","msg":"trace[874985590] transaction","detail":"{read_only:false; response_revision:1805; number_of_response:1; }","duration":"128.996586ms","start":"2025-04-14T14:42:12.296675Z","end":"2025-04-14T14:42:12.425672Z","steps":["trace[874985590] 'process raft request'  (duration: 128.079961ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T14:42:29.811595Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.362023ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11932452365827166964 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:25989634b465d2f3>","response":"size:42"}
	{"level":"info","ts":"2025-04-14T14:44:20.976766Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1495}
	{"level":"info","ts":"2025-04-14T14:44:20.980966Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1495,"took":"3.550898ms","hash":2769383186,"current-db-size-bytes":2400256,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2031616,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2025-04-14T14:44:20.981013Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2769383186,"revision":1495,"compact-revision":955}
	{"level":"info","ts":"2025-04-14T14:49:20.985771Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2116}
	{"level":"info","ts":"2025-04-14T14:49:20.990796Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":2116,"took":"4.442405ms","hash":2965091083,"current-db-size-bytes":2400256,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2244608,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2025-04-14T14:49:20.990930Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2965091083,"revision":2116,"compact-revision":1495}
	
	
	==> kernel <==
	 14:56:13 up 4 min,  0 users,  load average: 0.70, 0.34, 0.13
	Linux ha-290859 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2df8ccb8d6ed928a95e69ecd1be2105fc737c699aa26805820a0af0eca5bb50d] <==
	I0414 14:48:44.500441       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:48:54.500620       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:48:54.501802       1 main.go:301] handling current node
	I0414 14:48:54.501933       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:48:54.501959       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:49:04.501654       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:49:04.501878       1 main.go:301] handling current node
	I0414 14:49:04.502475       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:49:04.502663       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:49:14.500855       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:49:14.500928       1 main.go:301] handling current node
	I0414 14:49:14.500947       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:49:14.500953       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:49:24.509280       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:49:24.509428       1 main.go:301] handling current node
	I0414 14:49:24.509592       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:49:24.509696       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:49:34.500704       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:49:34.500778       1 main.go:301] handling current node
	I0414 14:49:34.500819       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:49:34.500825       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:49:44.504658       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:49:44.504751       1 main.go:301] handling current node
	I0414 14:49:44.504856       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:49:44.504972       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [607041fc2f4edc17de3caec2d00a9f9b49a94ed154254da72ec094a0f148db36] <==
	I0414 14:55:06.455751       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:55:16.456203       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:55:16.456257       1 main.go:301] handling current node
	I0414 14:55:16.456272       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:55:16.456277       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:55:26.465697       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:55:26.465845       1 main.go:301] handling current node
	I0414 14:55:26.465927       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:55:26.465968       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:55:36.463752       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:55:36.463830       1 main.go:301] handling current node
	I0414 14:55:36.463853       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:55:36.463859       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:55:46.456585       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:55:46.457113       1 main.go:301] handling current node
	I0414 14:55:46.457561       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:55:46.459726       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:55:56.464186       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:55:56.464300       1 main.go:301] handling current node
	I0414 14:55:56.464332       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:55:56.464345       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:56:06.455081       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:56:06.455167       1 main.go:301] handling current node
	I0414 14:56:06.455204       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:56:06.455229       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [00b109770be1cb3d772b7d440ccc36c098a8627e8280f195c263a0a87a6e0c07] <==
	I0414 14:51:57.932933       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0414 14:51:58.014528       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0414 14:51:58.014629       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0414 14:51:58.014535       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0414 14:51:58.023891       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0414 14:51:58.024459       1 shared_informer.go:320] Caches are synced for configmaps
	I0414 14:51:58.024473       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0414 14:51:58.024547       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0414 14:51:58.025376       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0414 14:51:58.035556       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0414 14:51:58.035771       1 aggregator.go:171] initial CRD sync complete...
	I0414 14:51:58.035828       1 autoregister_controller.go:144] Starting autoregister controller
	I0414 14:51:58.035845       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0414 14:51:58.035857       1 cache.go:39] Caches are synced for autoregister controller
	I0414 14:51:58.036008       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0414 14:51:58.036120       1 policy_source.go:240] refreshing policies
	I0414 14:51:58.097914       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0414 14:51:58.101123       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0414 14:51:58.918987       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0414 14:51:59.963976       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0414 14:52:04.263824       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0414 14:52:04.306348       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0414 14:52:04.363470       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0414 14:52:04.453440       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0414 14:52:04.454453       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [3607093f95b0430c4841d7be9ed19d0163ff2e9ee2889a44f89bd1ca07bf42d3] <==
	I0414 14:29:22.362271       1 autoregister_controller.go:144] Starting autoregister controller
	I0414 14:29:22.362276       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0414 14:29:22.362280       1 cache.go:39] Caches are synced for autoregister controller
	I0414 14:29:22.378719       1 controller.go:615] quota admission added evaluator for: namespaces
	I0414 14:29:22.457815       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0414 14:29:23.164003       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0414 14:29:23.168635       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0414 14:29:23.168816       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0414 14:29:23.763560       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0414 14:29:23.812117       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0414 14:29:23.884276       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0414 14:29:23.896601       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.110]
	I0414 14:29:23.897534       1 controller.go:615] quota admission added evaluator for: endpoints
	I0414 14:29:23.902387       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0414 14:29:24.193931       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0414 14:29:25.780107       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0414 14:29:25.808820       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0414 14:29:25.816856       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0414 14:29:29.653221       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0414 14:29:29.756960       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0414 14:41:55.019097       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52466: use of closed network connection
	E0414 14:41:55.440782       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52532: use of closed network connection
	E0414 14:41:55.859929       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52600: use of closed network connection
	E0414 14:41:58.277207       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52686: use of closed network connection
	E0414 14:41:58.438151       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52698: use of closed network connection
	
	
	==> kube-controller-manager [8263b35014337f6119ba3a0d6487090fd5b1b3b8a002a99623620e847d186847] <==
	I0414 14:42:29.963750       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:29.969981       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="39.002µs"
	I0414 14:42:30.275380       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:30.614411       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:33.964410       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-290859-m03"
	I0414 14:42:34.046665       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:39.961881       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:49.191468       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-290859-m03"
	I0414 14:42:49.192361       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:49.201252       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:49.216690       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="71.679µs"
	I0414 14:42:49.217122       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="45.948µs"
	I0414 14:42:49.230018       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="69.053µs"
	I0414 14:42:52.664944       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="13.387962ms"
	I0414 14:42:52.665652       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="82.546µs"
	I0414 14:42:53.979890       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:43:00.010906       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:46:33.503243       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:47:25.635375       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
	I0414 14:49:09.052122       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:49:09.070345       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:49:09.083390       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="59.905µs"
	I0414 14:49:09.105070       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="10.887319ms"
	I0414 14:49:09.105381       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="40.135µs"
	I0414 14:49:14.179848       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	
	
	==> kube-controller-manager [e8658abcccb8b10d531ad775050d96f3375e484efcbaba4d5509a7a22f3608a9] <==
	I0414 14:52:01.106966       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0414 14:52:01.111865       1 shared_informer.go:320] Caches are synced for resource quota
	I0414 14:52:01.117197       1 shared_informer.go:320] Caches are synced for crt configmap
	I0414 14:52:01.118548       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0414 14:52:01.123285       1 shared_informer.go:320] Caches are synced for disruption
	I0414 14:52:01.154050       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:52:01.197460       1 shared_informer.go:320] Caches are synced for garbage collector
	I0414 14:52:01.197682       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0414 14:52:01.197815       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0414 14:52:01.207566       1 shared_informer.go:320] Caches are synced for garbage collector
	I0414 14:52:02.153254       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
	I0414 14:52:04.272410       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="26.559874ms"
	I0414 14:52:04.273686       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="51.226µs"
	I0414 14:52:04.439056       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="13.737014ms"
	I0414 14:52:04.439344       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="242.032µs"
	I0414 14:52:04.459376       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="12.444236ms"
	I0414 14:52:04.460062       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="174.256µs"
	I0414 14:52:06.474796       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="54.379µs"
	I0414 14:52:06.508895       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="52.708µs"
	I0414 14:52:06.532239       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="7.280916ms"
	I0414 14:52:06.532571       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="115.282µs"
	I0414 14:52:38.517073       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="20.719998ms"
	I0414 14:52:38.517449       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="101.016µs"
	I0414 14:52:38.546449       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="13.225146ms"
	I0414 14:52:38.546575       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="46.763µs"
	
	
	==> kube-proxy [1c01d86a74294bbfd5f487ec85ffc0f35cc4b979ad90c940eea5a17a8e5f46fb] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0414 14:52:05.724966       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0414 14:52:05.743076       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.110"]
	E0414 14:52:05.743397       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0414 14:52:05.784686       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0414 14:52:05.784731       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0414 14:52:05.784755       1 server_linux.go:170] "Using iptables Proxier"
	I0414 14:52:05.786929       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0414 14:52:05.787617       1 server.go:497] "Version info" version="v1.32.2"
	I0414 14:52:05.787645       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 14:52:05.789983       1 config.go:199] "Starting service config controller"
	I0414 14:52:05.790536       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0414 14:52:05.791108       1 config.go:329] "Starting node config controller"
	I0414 14:52:05.791131       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0414 14:52:05.794555       1 config.go:105] "Starting endpoint slice config controller"
	I0414 14:52:05.796335       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0414 14:52:05.891275       1 shared_informer.go:320] Caches are synced for service config
	I0414 14:52:05.891550       1 shared_informer.go:320] Caches are synced for node config
	I0414 14:52:05.901825       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [e22a81661302ff340c9846a7a06a13d955ab98cfe8e7088e0c805fb4f3eee8a2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0414 14:29:30.555771       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0414 14:29:30.580550       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.110"]
	E0414 14:29:30.580640       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0414 14:29:30.617235       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0414 14:29:30.617293       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0414 14:29:30.617328       1 server_linux.go:170] "Using iptables Proxier"
	I0414 14:29:30.620046       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0414 14:29:30.620989       1 server.go:497] "Version info" version="v1.32.2"
	I0414 14:29:30.621018       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 14:29:30.625365       1 config.go:329] "Starting node config controller"
	I0414 14:29:30.625863       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0414 14:29:30.628597       1 config.go:199] "Starting service config controller"
	I0414 14:29:30.628644       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0414 14:29:30.628665       1 config.go:105] "Starting endpoint slice config controller"
	I0414 14:29:30.628683       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0414 14:29:30.726314       1 shared_informer.go:320] Caches are synced for node config
	I0414 14:29:30.729639       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0414 14:29:30.729680       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [29445064369e58250458efcfeed9a28e6da75ce4bcb6f15c9e58844eb1ba811e] <==
	I0414 14:51:55.842470       1 serving.go:386] Generated self-signed cert in-memory
	W0414 14:51:57.981716       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0414 14:51:57.981805       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0414 14:51:57.981829       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0414 14:51:57.981840       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0414 14:51:58.035351       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0414 14:51:58.035404       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 14:51:58.038565       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0414 14:51:58.038986       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0414 14:51:58.039147       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0414 14:51:58.039434       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0414 14:51:58.140699       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [341626ffff967b14e3bfaa050905eba2b82a07223c0356ee50b5deeef6d9898b] <==
	E0414 14:29:22.288686       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:22.287191       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:22.288704       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:22.286394       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0414 14:29:22.288719       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0414 14:29:22.285771       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.108289       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0414 14:29:23.108351       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.153824       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:23.153954       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.203744       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0414 14:29:23.203977       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0414 14:29:23.367236       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0414 14:29:23.367550       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.396026       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0414 14:29:23.396243       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.401643       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:23.401820       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.425454       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0414 14:29:23.425684       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.433181       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:23.433222       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.457688       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0414 14:29:23.457949       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0414 14:29:25.662221       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 14 14:52:06 ha-290859 kubelet[906]: I0414 14:52:06.454237     906 scope.go:117] "RemoveContainer" containerID="922f97d06563e10c12ce83edd45e4f1aa0b78449dcdb50b413a7f4fc80cc346b"
	Apr 14 14:52:06 ha-290859 kubelet[906]: I0414 14:52:06.455356     906 scope.go:117] "RemoveContainer" containerID="acc7b3f819a6b9fa74f5e5423aac252faa39c9dec24306ff130436d9a722188a"
	Apr 14 14:52:06 ha-290859 kubelet[906]: E0414 14:52:06.455566     906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a98bb55f-5a73-4436-82eb-ae7534928039)\"" pod="kube-system/storage-provisioner" podUID="a98bb55f-5a73-4436-82eb-ae7534928039"
	Apr 14 14:52:17 ha-290859 kubelet[906]: I0414 14:52:17.265870     906 scope.go:117] "RemoveContainer" containerID="acc7b3f819a6b9fa74f5e5423aac252faa39c9dec24306ff130436d9a722188a"
	Apr 14 14:52:48 ha-290859 kubelet[906]: I0414 14:52:48.224225     906 scope.go:117] "RemoveContainer" containerID="9914f8879fc4321c682c89c4d9b8a4cf65aa1773b5281eca94e0f93095a24f4d"
	Apr 14 14:52:48 ha-290859 kubelet[906]: E0414 14:52:48.281657     906 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:52:48 ha-290859 kubelet[906]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:52:48 ha-290859 kubelet[906]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:52:48 ha-290859 kubelet[906]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:52:48 ha-290859 kubelet[906]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 14:53:48 ha-290859 kubelet[906]: E0414 14:53:48.279850     906 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:53:48 ha-290859 kubelet[906]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:53:48 ha-290859 kubelet[906]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:53:48 ha-290859 kubelet[906]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:53:48 ha-290859 kubelet[906]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 14:54:48 ha-290859 kubelet[906]: E0414 14:54:48.287249     906 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:54:48 ha-290859 kubelet[906]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:54:48 ha-290859 kubelet[906]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:54:48 ha-290859 kubelet[906]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:54:48 ha-290859 kubelet[906]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 14:55:48 ha-290859 kubelet[906]: E0414 14:55:48.279366     906 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:55:48 ha-290859 kubelet[906]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:55:48 ha-290859 kubelet[906]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:55:48 ha-290859 kubelet[906]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:55:48 ha-290859 kubelet[906]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-290859 -n ha-290859
helpers_test.go:261: (dbg) Run:  kubectl --context ha-290859 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-58667487b6-q9jvx
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-290859 describe pod busybox-58667487b6-q9jvx
helpers_test.go:282: (dbg) kubectl --context ha-290859 describe pod busybox-58667487b6-q9jvx:

                                                
                                                
-- stdout --
	Name:             busybox-58667487b6-q9jvx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=58667487b6
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-58667487b6
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fklg7 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-fklg7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                    From               Message
	  ----     ------            ----                   ----               -------
	  Warning  FailedScheduling  4m13s (x2 over 4m16s)  default-scheduler  0/2 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  15m (x3 over 26m)      default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  13m (x2 over 13m)      default-scheduler  0/2 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  7m48s (x3 over 13m)    default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  6m54s (x2 over 7m5s)   default-scheduler  0/2 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.

-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (473.10s)
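
The FailedScheduling events above show the same pod alternating between two causes as the cluster restarts: pod anti-affinity against the other busybox replicas, and not-ready/unreachable taints on whichever node is still coming up. A minimal sketch of how to confirm both by hand, assuming the ha-290859 context and the busybox Deployment implied by "Controlled By: ReplicaSet/busybox-58667487b6" (these commands are not part of the test run):

    # show the anti-affinity rule the scheduler is enforcing
    kubectl --context ha-290859 get deploy busybox -o jsonpath='{.spec.template.spec.affinity.podAntiAffinity}'
    # show which nodes still carry not-ready/unreachable taints
    kubectl --context ha-290859 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints[*].key}{"\n"}{end}'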

                                                
TestMultiControlPlane/serial/DeleteSecondaryNode (9.64s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-290859 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-290859 node delete m03 -v=7 --alsologtostderr: (6.826674116s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-290859 status -v=7 --alsologtostderr
ha_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-290859 status -v=7 --alsologtostderr: exit status 2 (414.699949ms)

-- stdout --
	ha-290859
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-290859-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	

-- /stdout --
** stderr ** 
	I0414 14:56:21.864269 1222461 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:56:21.864368 1222461 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:56:21.864376 1222461 out.go:358] Setting ErrFile to fd 2...
	I0414 14:56:21.864380 1222461 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:56:21.864613 1222461 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
	I0414 14:56:21.864786 1222461 out.go:352] Setting JSON to false
	I0414 14:56:21.864831 1222461 mustload.go:65] Loading cluster: ha-290859
	I0414 14:56:21.864976 1222461 notify.go:220] Checking for updates...
	I0414 14:56:21.865424 1222461 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:56:21.865463 1222461 status.go:174] checking status of ha-290859 ...
	I0414 14:56:21.866115 1222461 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:56:21.866179 1222461 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:56:21.883374 1222461 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33969
	I0414 14:56:21.883874 1222461 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:56:21.884439 1222461 main.go:141] libmachine: Using API Version  1
	I0414 14:56:21.884465 1222461 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:56:21.884829 1222461 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:56:21.885026 1222461 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:56:21.886796 1222461 status.go:371] ha-290859 host status = "Running" (err=<nil>)
	I0414 14:56:21.886816 1222461 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:56:21.887134 1222461 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:56:21.887177 1222461 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:56:21.902273 1222461 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44959
	I0414 14:56:21.902681 1222461 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:56:21.903073 1222461 main.go:141] libmachine: Using API Version  1
	I0414 14:56:21.903094 1222461 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:56:21.903500 1222461 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:56:21.903686 1222461 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:56:21.906788 1222461 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:56:21.907165 1222461 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:56:21.907191 1222461 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:56:21.907289 1222461 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:56:21.907584 1222461 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:56:21.907631 1222461 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:56:21.922619 1222461 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45353
	I0414 14:56:21.923094 1222461 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:56:21.923567 1222461 main.go:141] libmachine: Using API Version  1
	I0414 14:56:21.923588 1222461 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:56:21.923951 1222461 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:56:21.924127 1222461 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:56:21.924291 1222461 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 14:56:21.924322 1222461 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:56:21.926979 1222461 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:56:21.927413 1222461 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:56:21.927437 1222461 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:56:21.927569 1222461 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:56:21.927737 1222461 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:56:21.927914 1222461 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:56:21.928037 1222461 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:56:22.007165 1222461 ssh_runner.go:195] Run: systemctl --version
	I0414 14:56:22.013644 1222461 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:56:22.028561 1222461 kubeconfig.go:125] found "ha-290859" server: "https://192.168.39.254:8443"
	I0414 14:56:22.028617 1222461 api_server.go:166] Checking apiserver status ...
	I0414 14:56:22.028672 1222461 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:56:22.041544 1222461 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1292/cgroup
	W0414 14:56:22.050867 1222461 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1292/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0414 14:56:22.050912 1222461 ssh_runner.go:195] Run: ls
	I0414 14:56:22.056111 1222461 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0414 14:56:22.061835 1222461 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0414 14:56:22.061860 1222461 status.go:463] ha-290859 apiserver status = Running (err=<nil>)
	I0414 14:56:22.061869 1222461 status.go:176] ha-290859 status: &{Name:ha-290859 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 14:56:22.061884 1222461 status.go:174] checking status of ha-290859-m02 ...
	I0414 14:56:22.062246 1222461 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:56:22.062308 1222461 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:56:22.078290 1222461 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39273
	I0414 14:56:22.078809 1222461 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:56:22.079340 1222461 main.go:141] libmachine: Using API Version  1
	I0414 14:56:22.079362 1222461 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:56:22.079714 1222461 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:56:22.079897 1222461 main.go:141] libmachine: (ha-290859-m02) Calling .GetState
	I0414 14:56:22.081594 1222461 status.go:371] ha-290859-m02 host status = "Running" (err=<nil>)
	I0414 14:56:22.081612 1222461 host.go:66] Checking if "ha-290859-m02" exists ...
	I0414 14:56:22.081919 1222461 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:56:22.081957 1222461 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:56:22.097842 1222461 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34509
	I0414 14:56:22.098317 1222461 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:56:22.098769 1222461 main.go:141] libmachine: Using API Version  1
	I0414 14:56:22.098791 1222461 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:56:22.099153 1222461 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:56:22.099359 1222461 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:56:22.102333 1222461 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:56:22.102710 1222461 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:56:22.102748 1222461 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:56:22.102852 1222461 host.go:66] Checking if "ha-290859-m02" exists ...
	I0414 14:56:22.103152 1222461 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:56:22.103194 1222461 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:56:22.118419 1222461 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35527
	I0414 14:56:22.118937 1222461 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:56:22.119398 1222461 main.go:141] libmachine: Using API Version  1
	I0414 14:56:22.119418 1222461 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:56:22.119792 1222461 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:56:22.119978 1222461 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:56:22.120177 1222461 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 14:56:22.120201 1222461 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:56:22.122583 1222461 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:56:22.122918 1222461 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:56:22.122962 1222461 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:56:22.123047 1222461 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:56:22.123189 1222461 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:56:22.123366 1222461 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:56:22.123494 1222461 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:56:22.202680 1222461 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:56:22.216968 1222461 kubeconfig.go:125] found "ha-290859" server: "https://192.168.39.254:8443"
	I0414 14:56:22.217002 1222461 api_server.go:166] Checking apiserver status ...
	I0414 14:56:22.217061 1222461 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0414 14:56:22.228708 1222461 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0414 14:56:22.228730 1222461 status.go:463] ha-290859-m02 apiserver status = Stopped (err=<nil>)
	I0414 14:56:22.228740 1222461 status.go:176] ha-290859-m02 status: &{Name:ha-290859-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-290859 status -v=7 --alsologtostderr" : exit status 2
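
For context, the stderr above shows how the Stopped verdict is reached: status probes each node over SSH, looks for a kube-apiserver process, then hits the HA VIP's healthz endpoint. A sketch of the same probes run by hand, assuming the profile, node name, and VIP shown in this log:

    # m02 reports Stopped because no apiserver process is found
    minikube -p ha-290859 ssh -n m02 -- 'sudo pgrep -xnf kube-apiserver.*minikube.*'
    # the VIP still answers 200 because the primary control plane is healthy
    curl -k https://192.168.39.254:8443/healthz
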
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-290859 -n ha-290859
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-290859 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-290859 logs -n 25: (1.500988364s)
helpers_test.go:252: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x -- nslookup |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx -- nslookup |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg -- nslookup |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x             |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx             |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg             |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg -- sh       |           |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1            |           |         |         |                     |                     |
	| node    | add -p ha-290859 -v=7                | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:42 UTC | 14 Apr 25 14:42 UTC |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-290859 node stop m02 -v=7         | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:42 UTC | 14 Apr 25 14:42 UTC |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-290859 node start m02 -v=7        | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:43 UTC |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-290859 -v=7               | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:48 UTC |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| stop    | -p ha-290859 -v=7                    | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:48 UTC | 14 Apr 25 14:51 UTC |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| start   | -p ha-290859 --wait=true -v=7        | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:51 UTC |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-290859                    | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:56 UTC |                     |
	| node    | ha-290859 node delete m03 -v=7       | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:56 UTC | 14 Apr 25 14:56 UTC |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 14:51:24
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 14:51:24.924385 1221070 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:51:24.924621 1221070 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:51:24.924629 1221070 out.go:358] Setting ErrFile to fd 2...
	I0414 14:51:24.924633 1221070 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:51:24.924808 1221070 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
	I0414 14:51:24.925345 1221070 out.go:352] Setting JSON to false
	I0414 14:51:24.926340 1221070 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":23628,"bootTime":1744618657,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 14:51:24.926457 1221070 start.go:139] virtualization: kvm guest
	I0414 14:51:24.928287 1221070 out.go:177] * [ha-290859] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 14:51:24.929459 1221070 out.go:177]   - MINIKUBE_LOCATION=20512
	I0414 14:51:24.929469 1221070 notify.go:220] Checking for updates...
	I0414 14:51:24.931737 1221070 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 14:51:24.933068 1221070 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:51:24.934102 1221070 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:51:24.935103 1221070 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 14:51:24.936089 1221070 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 14:51:24.937496 1221070 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:51:24.937602 1221070 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 14:51:24.938128 1221070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:51:24.938198 1221070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:51:24.954244 1221070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45077
	I0414 14:51:24.954880 1221070 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:51:24.955464 1221070 main.go:141] libmachine: Using API Version  1
	I0414 14:51:24.955489 1221070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:51:24.955900 1221070 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:51:24.956117 1221070 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:51:24.990242 1221070 out.go:177] * Using the kvm2 driver based on existing profile
	I0414 14:51:24.991319 1221070 start.go:297] selected driver: kvm2
	I0414 14:51:24.991332 1221070 start.go:901] validating driver "kvm2" against &{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.112 Port:0 KubernetesVersion:v1.32.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:51:24.991491 1221070 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 14:51:24.991827 1221070 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:51:24.991902 1221070 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20512-1196368/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 14:51:25.007424 1221070 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 14:51:25.008082 1221070 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 14:51:25.008124 1221070 cni.go:84] Creating CNI manager for ""
	I0414 14:51:25.008189 1221070 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0414 14:51:25.008244 1221070 start.go:340] cluster config:
	{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.112 Port:0 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:51:25.008400 1221070 iso.go:125] acquiring lock: {Name:mkbf783c803effe6c4b8297ac6b84dcca9e29413 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:51:25.010019 1221070 out.go:177] * Starting "ha-290859" primary control-plane node in "ha-290859" cluster
	I0414 14:51:25.011347 1221070 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:51:25.011399 1221070 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
	I0414 14:51:25.011409 1221070 cache.go:56] Caching tarball of preloaded images
	I0414 14:51:25.011488 1221070 preload.go:172] Found /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0414 14:51:25.011498 1221070 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0414 14:51:25.011617 1221070 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:51:25.011799 1221070 start.go:360] acquireMachinesLock for ha-290859: {Name:mk496006d22a0565bb9e0d565e1b3cb0cf0971cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 14:51:25.011840 1221070 start.go:364] duration metric: took 23.649µs to acquireMachinesLock for "ha-290859"
	I0414 14:51:25.011855 1221070 start.go:96] Skipping create...Using existing machine configuration
	I0414 14:51:25.011862 1221070 fix.go:54] fixHost starting: 
	I0414 14:51:25.012121 1221070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:51:25.012156 1221070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:51:25.026599 1221070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40091
	I0414 14:51:25.027122 1221070 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:51:25.027660 1221070 main.go:141] libmachine: Using API Version  1
	I0414 14:51:25.027688 1221070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:51:25.028011 1221070 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:51:25.028229 1221070 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:51:25.028380 1221070 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:51:25.030231 1221070 fix.go:112] recreateIfNeeded on ha-290859: state=Stopped err=<nil>
	I0414 14:51:25.030265 1221070 main.go:141] libmachine: (ha-290859) Calling .DriverName
	W0414 14:51:25.030457 1221070 fix.go:138] unexpected machine state, will restart: <nil>
	I0414 14:51:25.032663 1221070 out.go:177] * Restarting existing kvm2 VM for "ha-290859" ...
	I0414 14:51:25.033815 1221070 main.go:141] libmachine: (ha-290859) Calling .Start
	I0414 14:51:25.034026 1221070 main.go:141] libmachine: (ha-290859) starting domain...
	I0414 14:51:25.034048 1221070 main.go:141] libmachine: (ha-290859) ensuring networks are active...
	I0414 14:51:25.034729 1221070 main.go:141] libmachine: (ha-290859) Ensuring network default is active
	I0414 14:51:25.035067 1221070 main.go:141] libmachine: (ha-290859) Ensuring network mk-ha-290859 is active
	I0414 14:51:25.035424 1221070 main.go:141] libmachine: (ha-290859) getting domain XML...
	I0414 14:51:25.036088 1221070 main.go:141] libmachine: (ha-290859) creating domain...
	I0414 14:51:26.234459 1221070 main.go:141] libmachine: (ha-290859) waiting for IP...
	I0414 14:51:26.235587 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:26.236072 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:26.236210 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:26.236086 1221099 retry.go:31] will retry after 280.740636ms: waiting for domain to come up
	I0414 14:51:26.518687 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:26.519197 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:26.519215 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:26.519169 1221099 retry.go:31] will retry after 243.427688ms: waiting for domain to come up
	I0414 14:51:26.765118 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:26.765534 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:26.765582 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:26.765501 1221099 retry.go:31] will retry after 427.840973ms: waiting for domain to come up
	I0414 14:51:27.195132 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:27.195585 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:27.195651 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:27.195569 1221099 retry.go:31] will retry after 469.259994ms: waiting for domain to come up
	I0414 14:51:27.666308 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:27.666685 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:27.666712 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:27.666664 1221099 retry.go:31] will retry after 657.912219ms: waiting for domain to come up
	I0414 14:51:28.326528 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:28.326927 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:28.326955 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:28.326878 1221099 retry.go:31] will retry after 750.684746ms: waiting for domain to come up
	I0414 14:51:29.078742 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:29.079136 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:29.079161 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:29.079097 1221099 retry.go:31] will retry after 1.04198738s: waiting for domain to come up
	I0414 14:51:30.122400 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:30.122774 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:30.122798 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:30.122735 1221099 retry.go:31] will retry after 1.397183101s: waiting for domain to come up
	I0414 14:51:31.522268 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:31.522683 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:31.522709 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:31.522652 1221099 retry.go:31] will retry after 1.778850774s: waiting for domain to come up
	I0414 14:51:33.303491 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:33.303831 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:33.303859 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:33.303809 1221099 retry.go:31] will retry after 2.116605484s: waiting for domain to come up
	I0414 14:51:35.422345 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:35.422804 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:35.422863 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:35.422810 1221099 retry.go:31] will retry after 2.695384495s: waiting for domain to come up
	I0414 14:51:38.120436 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:38.120841 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:38.120862 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:38.120804 1221099 retry.go:31] will retry after 2.291586599s: waiting for domain to come up
	I0414 14:51:40.414425 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:40.414781 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:40.414804 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:40.414750 1221099 retry.go:31] will retry after 4.202133346s: waiting for domain to come up
	I0414 14:51:44.622185 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.622671 1221070 main.go:141] libmachine: (ha-290859) found domain IP: 192.168.39.110
	I0414 14:51:44.622701 1221070 main.go:141] libmachine: (ha-290859) reserving static IP address...
	I0414 14:51:44.622714 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has current primary IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.623272 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "ha-290859", mac: "52:54:00:be:9f:8b", ip: "192.168.39.110"} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:44.623307 1221070 main.go:141] libmachine: (ha-290859) DBG | skip adding static IP to network mk-ha-290859 - found existing host DHCP lease matching {name: "ha-290859", mac: "52:54:00:be:9f:8b", ip: "192.168.39.110"}
	I0414 14:51:44.623333 1221070 main.go:141] libmachine: (ha-290859) reserved static IP address 192.168.39.110 for domain ha-290859
	I0414 14:51:44.623346 1221070 main.go:141] libmachine: (ha-290859) waiting for SSH...
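	# Sketch (assumption, not from this run): host-side view of the DHCP wait above,
	# using stock libvirt tooling and the network/domain names shown in this log.
	virsh net-dhcp-leases mk-ha-290859
	virsh domifaddr ha-290859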
	I0414 14:51:44.623353 1221070 main.go:141] libmachine: (ha-290859) DBG | Getting to WaitForSSH function...
	I0414 14:51:44.625584 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.625894 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:44.625919 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.626118 1221070 main.go:141] libmachine: (ha-290859) DBG | Using SSH client type: external
	I0414 14:51:44.626160 1221070 main.go:141] libmachine: (ha-290859) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa (-rw-------)
	I0414 14:51:44.626206 1221070 main.go:141] libmachine: (ha-290859) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.110 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:51:44.626228 1221070 main.go:141] libmachine: (ha-290859) DBG | About to run SSH command:
	I0414 14:51:44.626236 1221070 main.go:141] libmachine: (ha-290859) DBG | exit 0
	I0414 14:51:44.746948 1221070 main.go:141] libmachine: (ha-290859) DBG | SSH cmd err, output: <nil>: 
	I0414 14:51:44.747341 1221070 main.go:141] libmachine: (ha-290859) Calling .GetConfigRaw
	I0414 14:51:44.748066 1221070 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:51:44.750502 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.750990 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:44.751020 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.751318 1221070 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:51:44.751530 1221070 machine.go:93] provisionDockerMachine start ...
	I0414 14:51:44.751557 1221070 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:51:44.751774 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:51:44.754154 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.754523 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:44.754549 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.754732 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:51:44.754917 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:44.755086 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:44.755209 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:51:44.755372 1221070 main.go:141] libmachine: Using SSH client type: native
	I0414 14:51:44.755592 1221070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:51:44.755609 1221070 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 14:51:44.859385 1221070 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0414 14:51:44.859420 1221070 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:51:44.859703 1221070 buildroot.go:166] provisioning hostname "ha-290859"
	I0414 14:51:44.859733 1221070 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:51:44.859976 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:51:44.862591 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.862947 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:44.862982 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.863100 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:51:44.863336 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:44.863508 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:44.863682 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:51:44.863853 1221070 main.go:141] libmachine: Using SSH client type: native
	I0414 14:51:44.864206 1221070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:51:44.864235 1221070 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-290859 && echo "ha-290859" | sudo tee /etc/hostname
	I0414 14:51:44.980307 1221070 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-290859
	
	I0414 14:51:44.980345 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:51:44.983477 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.983889 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:44.983935 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.984061 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:51:44.984280 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:44.984453 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:44.984640 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:51:44.984799 1221070 main.go:141] libmachine: Using SSH client type: native
	I0414 14:51:44.985038 1221070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:51:44.985053 1221070 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-290859' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-290859/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-290859' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:51:45.095107 1221070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:51:45.095137 1221070 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1196368/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1196368/.minikube}
	I0414 14:51:45.095159 1221070 buildroot.go:174] setting up certificates
	I0414 14:51:45.095170 1221070 provision.go:84] configureAuth start
	I0414 14:51:45.095189 1221070 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:51:45.095535 1221070 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:51:45.098271 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.098658 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:45.098683 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.098857 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:51:45.101319 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.101590 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:45.101614 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.101756 1221070 provision.go:143] copyHostCerts
	I0414 14:51:45.101791 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:51:45.101823 1221070 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem, removing ...
	I0414 14:51:45.101841 1221070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:51:45.101907 1221070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem (1123 bytes)
	I0414 14:51:45.101983 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:51:45.102001 1221070 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem, removing ...
	I0414 14:51:45.102007 1221070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:51:45.102032 1221070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem (1675 bytes)
	I0414 14:51:45.102075 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:51:45.102097 1221070 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem, removing ...
	I0414 14:51:45.102103 1221070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:51:45.102122 1221070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem (1082 bytes)
	I0414 14:51:45.102165 1221070 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem org=jenkins.ha-290859 san=[127.0.0.1 192.168.39.110 ha-290859 localhost minikube]
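
provision.go here issues a server certificate signed by the local minikube CA whose SANs cover every name a client might dial: loopback, the VM's IP, the hostname, and the generic names. A sketch of that issuance with crypto/x509, assuming the CA pair from ca.pem/ca-key.pem is already parsed (a sketch of the pattern, not minikube's implementation):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    // issueServerCert mirrors the SAN set logged above; caCert/caKey
    // stand in for the parsed ca.pem/ca-key.pem.
    func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-290859"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-290859", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.110")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        return der, key, err
    }

    func main() {
        // Self-signed throwaway CA so the sketch runs end to end.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)
        der, _, err := issueServerCert(caCert, caKey)
        fmt.Println("server cert DER bytes:", len(der), "err:", err)
    }
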
	I0414 14:51:45.257877 1221070 provision.go:177] copyRemoteCerts
	I0414 14:51:45.257960 1221070 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:51:45.257996 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:51:45.261081 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.261410 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:45.261440 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.261666 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:51:45.261911 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:45.262125 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:51:45.262285 1221070 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:51:45.340876 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0414 14:51:45.340975 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0414 14:51:45.362634 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0414 14:51:45.362694 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 14:51:45.383617 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0414 14:51:45.383700 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 14:51:45.404718 1221070 provision.go:87] duration metric: took 309.531359ms to configureAuth
	I0414 14:51:45.404750 1221070 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:51:45.405030 1221070 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:51:45.405049 1221070 machine.go:96] duration metric: took 653.506288ms to provisionDockerMachine
	I0414 14:51:45.405057 1221070 start.go:293] postStartSetup for "ha-290859" (driver="kvm2")
	I0414 14:51:45.405066 1221070 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:51:45.405099 1221070 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:51:45.405452 1221070 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:51:45.405481 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:51:45.408299 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.408642 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:45.408670 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.408811 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:51:45.408995 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:45.409115 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:51:45.409248 1221070 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:51:45.489101 1221070 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:51:45.493122 1221070 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:51:45.493155 1221070 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/addons for local assets ...
	I0414 14:51:45.493230 1221070 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/files for local assets ...
	I0414 14:51:45.493340 1221070 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> 12036392.pem in /etc/ssl/certs
	I0414 14:51:45.493354 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /etc/ssl/certs/12036392.pem
	I0414 14:51:45.493471 1221070 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:51:45.502327 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:51:45.523422 1221070 start.go:296] duration metric: took 118.348669ms for postStartSetup
	I0414 14:51:45.523473 1221070 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:51:45.523812 1221070 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0414 14:51:45.523846 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:51:45.526608 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.526952 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:45.526984 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.527122 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:51:45.527317 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:45.527485 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:51:45.527636 1221070 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:51:45.609005 1221070 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0414 14:51:45.609116 1221070 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0414 14:51:45.667143 1221070 fix.go:56] duration metric: took 20.655266779s for fixHost
	I0414 14:51:45.667202 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:51:45.670139 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.670591 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:45.670620 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.670836 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:51:45.671137 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:45.671338 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:45.671522 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:51:45.671692 1221070 main.go:141] libmachine: Using SSH client type: native
	I0414 14:51:45.671935 1221070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:51:45.671948 1221070 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:51:45.775787 1221070 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744642305.752586107
	
	I0414 14:51:45.775819 1221070 fix.go:216] guest clock: 1744642305.752586107
	I0414 14:51:45.775848 1221070 fix.go:229] Guest: 2025-04-14 14:51:45.752586107 +0000 UTC Remote: 2025-04-14 14:51:45.667180128 +0000 UTC m=+20.782398303 (delta=85.405979ms)
	I0414 14:51:45.775882 1221070 fix.go:200] guest clock delta is within tolerance: 85.405979ms
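
fix.go compares the guest's `date +%s.%N` output against the host clock and only resyncs when the drift exceeds a tolerance; here the 85ms delta passes. A sketch of the comparison, with a 2s tolerance assumed purely for illustration:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // clockDelta parses the guest's `date +%s.%N` output and returns
    // the absolute drift from the host clock.
    func clockDelta(guestOut string, hostNow time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return hostNow.Sub(guest).Abs(), nil
    }

    func main() {
        d, _ := clockDelta("1744642305.752586107", time.Now())
        fmt.Println("delta:", d, "within tolerance:", d <= 2*time.Second)
    }
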
	I0414 14:51:45.775900 1221070 start.go:83] releasing machines lock for "ha-290859", held for 20.764045917s
	I0414 14:51:45.775923 1221070 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:51:45.776216 1221070 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:51:45.778889 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.779306 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:45.779339 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.779531 1221070 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:51:45.780063 1221070 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:51:45.780265 1221070 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:51:45.780372 1221070 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:51:45.780417 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:51:45.780527 1221070 ssh_runner.go:195] Run: cat /version.json
	I0414 14:51:45.780554 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:51:45.783291 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.783315 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.783676 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:45.783718 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.783821 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:51:45.783864 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:45.783889 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.784002 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:51:45.784123 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:45.784177 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:45.784299 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:51:45.784385 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:51:45.784475 1221070 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:51:45.784588 1221070 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:51:45.860084 1221070 ssh_runner.go:195] Run: systemctl --version
	I0414 14:51:45.888174 1221070 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 14:51:45.893495 1221070 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:51:45.893571 1221070 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:51:45.908348 1221070 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
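
Pre-existing bridge/podman CNI configs are renamed with a .mk_disabled suffix rather than deleted, so only the CNI minikube installs (kindnet, for this three-node cluster) gets loaded, while the originals survive for debugging. A Go sketch of that find/rename step (an assumed equivalent, not the actual code):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableConflictingCNI renames bridge/podman configs out of the
    // CNI search path, mirroring the find/mv command above.
    func disableConflictingCNI(dir string) ([]string, error) {
        var disabled []string
        for _, pat := range []string{"*bridge*", "*podman*"} {
            matches, err := filepath.Glob(filepath.Join(dir, pat))
            if err != nil {
                return nil, err
            }
            for _, f := range matches {
                if strings.HasSuffix(f, ".mk_disabled") {
                    continue // already disabled on a previous start
                }
                if err := os.Rename(f, f+".mk_disabled"); err != nil {
                    return nil, err
                }
                disabled = append(disabled, f)
            }
        }
        return disabled, nil
    }

    func main() {
        disabled, err := disableConflictingCNI("/etc/cni/net.d")
        fmt.Println(disabled, err)
    }
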
	I0414 14:51:45.908375 1221070 start.go:495] detecting cgroup driver to use...
	I0414 14:51:45.908446 1221070 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0414 14:51:45.935942 1221070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0414 14:51:45.948409 1221070 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:51:45.948475 1221070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:51:45.960942 1221070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:51:45.974488 1221070 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:51:46.086503 1221070 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:51:46.230317 1221070 docker.go:233] disabling docker service ...
	I0414 14:51:46.230381 1221070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:51:46.244297 1221070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:51:46.256626 1221070 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:51:46.408783 1221070 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:51:46.531425 1221070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
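
Note the teardown order used for each competing runtime above: stop the socket, stop the service, disable the socket, mask the service, so systemd socket activation cannot resurrect it behind containerd's back. A sketch of the same sequence (stop failures tolerated, since the unit may not be running):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // maskService stops, disables, and masks a runtime's units in the
    // same order as the log above.
    func maskService(name string) {
        steps := [][]string{
            {"systemctl", "stop", "-f", name + ".socket"},
            {"systemctl", "stop", "-f", name + ".service"},
            {"systemctl", "disable", name + ".socket"},
            {"systemctl", "mask", name + ".service"},
        }
        for _, s := range steps {
            if out, err := exec.Command("sudo", s...).CombinedOutput(); err != nil {
                // Tolerated: e.g. stopping a unit that is not active.
                fmt.Printf("%v: %v (%s)\n", s, err, out)
            }
        }
    }

    func main() { maskService("docker") }
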
	I0414 14:51:46.544279 1221070 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:51:46.561206 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0414 14:51:46.570536 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0414 14:51:46.579933 1221070 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0414 14:51:46.579987 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0414 14:51:46.589083 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:51:46.598516 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0414 14:51:46.608502 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:51:46.618260 1221070 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:51:46.628002 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0414 14:51:46.637979 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0414 14:51:46.647708 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
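
The sed pipeline above rewrites containerd's config.toml so the runtime matches the kubelet's cgroupfs driver; the key edit forces SystemdCgroup = false while preserving indentation. The same rewrite over an in-memory config, as a sketch:

    package main

    import (
        "fmt"
        "regexp"
    )

    // setCgroupfs applies the same edit as the sed command above:
    // force SystemdCgroup = false, keeping the line's indentation.
    func setCgroupfs(configTOML string) string {
        re := regexp.MustCompile(`(?m)^([ \t]*)SystemdCgroup = .*$`)
        return re.ReplaceAllString(configTOML, "${1}SystemdCgroup = false")
    }

    func main() {
        in := "  [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n    SystemdCgroup = true\n"
        fmt.Print(setCgroupfs(in))
    }
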
	I0414 14:51:46.657465 1221070 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:51:46.666456 1221070 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:51:46.666506 1221070 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:51:46.679179 1221070 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
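
The sysctl probe failing with status 255 is expected on a fresh boot: /proc/sys/net/bridge only exists once br_netfilter is loaded, so the code falls back to modprobe and then enables IPv4 forwarding. A sketch of that fallback:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // ensureBridgeNetfilter probes the bridge-netfilter sysctl, loads
    // br_netfilter if it is absent, then turns on IPv4 forwarding.
    func ensureBridgeNetfilter() error {
        if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
            if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
                return fmt.Errorf("modprobe br_netfilter: %w", err)
            }
        }
        return exec.Command("sudo", "sh", "-c",
            "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
    }

    func main() { fmt.Println(ensureBridgeNetfilter()) }
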
	I0414 14:51:46.688058 1221070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:51:46.803994 1221070 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0414 14:51:46.830741 1221070 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0414 14:51:46.830851 1221070 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:51:46.834666 1221070 retry.go:31] will retry after 684.331118ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0414 14:51:47.519413 1221070 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
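
After restarting containerd, the socket is polled rather than assumed to be up: the first stat fails, a retry is scheduled, and the second attempt succeeds roughly 700ms later, well inside the 60s budget. The real code backs off via retry.go; a fixed-interval sketch:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until the socket path exists or the budget
    // runs out, mirroring the wait logged above.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    }

    func main() {
        fmt.Println(waitForSocket("/run/containerd/containerd.sock", 60*time.Second))
    }
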
	I0414 14:51:47.524753 1221070 start.go:563] Will wait 60s for crictl version
	I0414 14:51:47.524814 1221070 ssh_runner.go:195] Run: which crictl
	I0414 14:51:47.528401 1221070 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:51:47.567610 1221070 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0414 14:51:47.567684 1221070 ssh_runner.go:195] Run: containerd --version
	I0414 14:51:47.592654 1221070 ssh_runner.go:195] Run: containerd --version
	I0414 14:51:47.616410 1221070 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.23 ...
	I0414 14:51:47.617662 1221070 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:51:47.620124 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:47.620497 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:47.620523 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:47.620761 1221070 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 14:51:47.624661 1221070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:51:47.636875 1221070 kubeadm.go:883] updating cluster {Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.112 Port:0 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 14:51:47.637062 1221070 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:51:47.637127 1221070 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:51:47.668962 1221070 containerd.go:627] all images are preloaded for containerd runtime.
	I0414 14:51:47.668993 1221070 containerd.go:534] Images already preloaded, skipping extraction
	I0414 14:51:47.669051 1221070 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:51:47.700719 1221070 containerd.go:627] all images are preloaded for containerd runtime.
	I0414 14:51:47.700748 1221070 cache_images.go:84] Images are preloaded, skipping loading
	I0414 14:51:47.700756 1221070 kubeadm.go:934] updating node { 192.168.39.110 8443 v1.32.2 containerd true true} ...
	I0414 14:51:47.700911 1221070 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-290859 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.110
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 14:51:47.701015 1221070 ssh_runner.go:195] Run: sudo crictl info
	I0414 14:51:47.733009 1221070 cni.go:84] Creating CNI manager for ""
	I0414 14:51:47.733034 1221070 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0414 14:51:47.733058 1221070 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 14:51:47.733086 1221070 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.110 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-290859 NodeName:ha-290859 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.110"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.110 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 14:51:47.733246 1221070 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.110
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-290859"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.110"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.110"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 14:51:47.733266 1221070 kube-vip.go:115] generating kube-vip config ...
	I0414 14:51:47.733322 1221070 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0414 14:51:47.749704 1221070 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0414 14:51:47.749841 1221070 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.10
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0414 14:51:47.749916 1221070 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 14:51:47.759441 1221070 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 14:51:47.759517 1221070 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0414 14:51:47.768745 1221070 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0414 14:51:47.784598 1221070 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 14:51:47.800512 1221070 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0414 14:51:47.816194 1221070 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1442 bytes)
	I0414 14:51:47.832579 1221070 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0414 14:51:47.836561 1221070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:51:47.848464 1221070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:51:47.961061 1221070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:51:47.977110 1221070 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859 for IP: 192.168.39.110
	I0414 14:51:47.977148 1221070 certs.go:194] generating shared ca certs ...
	I0414 14:51:47.977165 1221070 certs.go:226] acquiring lock for ca certs: {Name:mk7215406b4c41badf9eca6bf9f1036fd88f670e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:51:47.977358 1221070 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key
	I0414 14:51:47.977426 1221070 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key
	I0414 14:51:47.977447 1221070 certs.go:256] generating profile certs ...
	I0414 14:51:47.977567 1221070 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key
	I0414 14:51:47.977595 1221070 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.c955092d
	I0414 14:51:47.977626 1221070 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.c955092d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110 192.168.39.111 192.168.39.254]
	I0414 14:51:48.116172 1221070 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.c955092d ...
	I0414 14:51:48.116203 1221070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.c955092d: {Name:mk9edc6f7524dc9ba3b3dee538c59fbd77ccd148 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:51:48.116397 1221070 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.c955092d ...
	I0414 14:51:48.116412 1221070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.c955092d: {Name:mk18dc0fd4ba99bfeaa95fae1a08a91f3d1054da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:51:48.116516 1221070 certs.go:381] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.c955092d -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt
	I0414 14:51:48.116679 1221070 certs.go:385] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.c955092d -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key
	I0414 14:51:48.116822 1221070 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key
	I0414 14:51:48.116845 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0414 14:51:48.116863 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0414 14:51:48.116876 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0414 14:51:48.116888 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0414 14:51:48.116898 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0414 14:51:48.116907 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0414 14:51:48.116916 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0414 14:51:48.116925 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0414 14:51:48.116971 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem (1338 bytes)
	W0414 14:51:48.117008 1221070 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639_empty.pem, impossibly tiny 0 bytes
	I0414 14:51:48.117018 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 14:51:48.117040 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem (1082 bytes)
	I0414 14:51:48.117066 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:51:48.117086 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem (1675 bytes)
	I0414 14:51:48.117120 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:51:48.117150 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /usr/share/ca-certificates/12036392.pem
	I0414 14:51:48.117163 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:51:48.117173 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem -> /usr/share/ca-certificates/1203639.pem
	I0414 14:51:48.117829 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:51:48.149051 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:51:48.177053 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:51:48.209173 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:51:48.253240 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0414 14:51:48.287575 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 14:51:48.318676 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:51:48.341473 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 14:51:48.364366 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /usr/share/ca-certificates/12036392.pem (1708 bytes)
	I0414 14:51:48.392240 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:51:48.414262 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem --> /usr/share/ca-certificates/1203639.pem (1338 bytes)
	I0414 14:51:48.435434 1221070 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 14:51:48.451391 1221070 ssh_runner.go:195] Run: openssl version
	I0414 14:51:48.456643 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12036392.pem && ln -fs /usr/share/ca-certificates/12036392.pem /etc/ssl/certs/12036392.pem"
	I0414 14:51:48.467055 1221070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12036392.pem
	I0414 14:51:48.471094 1221070 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/12036392.pem
	I0414 14:51:48.471167 1221070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12036392.pem
	I0414 14:51:48.476620 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12036392.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 14:51:48.487041 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:51:48.497119 1221070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:51:48.501253 1221070 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:51:48.501303 1221070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:51:48.506464 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 14:51:48.516670 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1203639.pem && ln -fs /usr/share/ca-certificates/1203639.pem /etc/ssl/certs/1203639.pem"
	I0414 14:51:48.526675 1221070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1203639.pem
	I0414 14:51:48.530724 1221070 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1203639.pem
	I0414 14:51:48.530790 1221070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1203639.pem
	I0414 14:51:48.536779 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1203639.pem /etc/ssl/certs/51391683.0"
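
The `ln -fs ... <hash>.0` commands above exist because OpenSSL looks up trust anchors in /etc/ssl/certs by subject-hash filename (b5213941.0 is minikubeCA here). A sketch that computes the expected link name by shelling out to the same `openssl x509 -hash` invocation:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hashLinkName returns the subject-hash symlink name OpenSSL
    // expects for a CA placed in /etc/ssl/certs.
    func hashLinkName(pemPath string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)) + ".0", nil
    }

    func main() {
        name, err := hashLinkName("/usr/share/ca-certificates/minikubeCA.pem")
        fmt.Println(name, err) // e.g. b5213941.0 <nil>
    }
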
	I0414 14:51:48.547496 1221070 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:51:48.551752 1221070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0414 14:51:48.557436 1221070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0414 14:51:48.563312 1221070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0414 14:51:48.569039 1221070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0414 14:51:48.575033 1221070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0414 14:51:48.580579 1221070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
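
Each control-plane certificate is then screened with `openssl x509 -checkend 86400`: it must still be valid 24 hours from now, or it would be regenerated before the restart proceeds. The equivalent check in Go, as a sketch:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "errors"
        "fmt"
        "os"
        "time"
    )

    // validFor24h reproduces `openssl x509 -checkend 86400`: the cert
    // passes only if it remains valid a full day from now.
    func validFor24h(path string) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, errors.New("no PEM block in " + path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(24 * time.Hour).Before(cert.NotAfter), nil
    }

    func main() {
        ok, err := validFor24h("/var/lib/minikube/certs/etcd/server.crt")
        fmt.Println(ok, err)
    }
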
	I0414 14:51:48.586320 1221070 kubeadm.go:392] StartCluster: {Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.112 Port:0 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:51:48.586432 1221070 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0414 14:51:48.586516 1221070 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 14:51:48.621007 1221070 cri.go:89] found id: "731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0"
	I0414 14:51:48.621036 1221070 cri.go:89] found id: "0ec0a3a234c7c9ab89ca83a237362a229e9c5f0e94fdbf641b886cf994e1cd2f"
	I0414 14:51:48.621043 1221070 cri.go:89] found id: "922f97d06563e10c12ce83edd45e4f1aa0b78449dcdb50b413a7f4fc80cc346b"
	I0414 14:51:48.621047 1221070 cri.go:89] found id: "2df8ccb8d6ed928a95e69ecd1be2105fc737c699aa26805820a0af0eca5bb50d"
	I0414 14:51:48.621051 1221070 cri.go:89] found id: "e22a81661302ff340c9846a7a06a13d955ab98cfe8e7088e0c805fb4f3eee8a2"
	I0414 14:51:48.621056 1221070 cri.go:89] found id: "9914f8879fc4321c682c89c4d9b8a4cf65aa1773b5281eca94e0f93095a24f4d"
	I0414 14:51:48.621059 1221070 cri.go:89] found id: "8263b35014337f6119ba3a0d6487090fd5b1b3b8a002a99623620e847d186847"
	I0414 14:51:48.621063 1221070 cri.go:89] found id: "3607093f95b0430c4841d7be9ed19d0163ff2e9ee2889a44f89bd1ca07bf42d3"
	I0414 14:51:48.621066 1221070 cri.go:89] found id: "b9d0c942045346e617420beacf1ee53ebaa73b72295bfad233845fe524f8b15c"
	I0414 14:51:48.621076 1221070 cri.go:89] found id: "341626ffff967b14e3bfaa050905eba2b82a07223c0356ee50b5deeef6d9898b"
	I0414 14:51:48.621080 1221070 cri.go:89] found id: ""
	I0414 14:51:48.621136 1221070 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0414 14:51:48.634683 1221070 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-04-14T14:51:48Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0414 14:51:48.634779 1221070 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 14:51:48.644649 1221070 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0414 14:51:48.644668 1221070 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0414 14:51:48.644716 1221070 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0414 14:51:48.653466 1221070 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0414 14:51:48.653918 1221070 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-290859" does not appear in /home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:51:48.654026 1221070 kubeconfig.go:62] /home/jenkins/minikube-integration/20512-1196368/kubeconfig needs updating (will repair): [kubeconfig missing "ha-290859" cluster setting kubeconfig missing "ha-290859" context setting]
	I0414 14:51:48.654307 1221070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/kubeconfig: {Name:mkeb969af3beabfdafe344f27031959a97621135 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:51:48.654727 1221070 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:51:48.654871 1221070 kapi.go:59] client config for ha-290859: &rest.Config{Host:"https://192.168.39.110:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt", KeyFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key", CAFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0414 14:51:48.655325 1221070 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0414 14:51:48.655343 1221070 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0414 14:51:48.655349 1221070 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0414 14:51:48.655355 1221070 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0414 14:51:48.655383 1221070 cert_rotation.go:140] Starting client certificate rotation controller
	I0414 14:51:48.655782 1221070 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0414 14:51:48.666379 1221070 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.110
	I0414 14:51:48.666416 1221070 kubeadm.go:597] duration metric: took 21.742146ms to restartPrimaryControlPlane
	I0414 14:51:48.666430 1221070 kubeadm.go:394] duration metric: took 80.118757ms to StartCluster
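
The restart path avoids rerunning kubeadm when nothing changed: it diffs the freshly rendered kubeadm.yaml.new against the kubeadm.yaml already on the node, and a zero exit (no differences) lets it conclude "does not require reconfiguration". A sketch of that gate:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // clusterNeedsReconfig mirrors the diff check above: `diff -u`
    // exits non-zero only when the rendered config differs from the
    // deployed one.
    func clusterNeedsReconfig() bool {
        err := exec.Command("sudo", "diff", "-u",
            "/var/tmp/minikube/kubeadm.yaml",
            "/var/tmp/minikube/kubeadm.yaml.new").Run()
        return err != nil
    }

    func main() { fmt.Println("needs reconfig:", clusterNeedsReconfig()) }
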
	I0414 14:51:48.666454 1221070 settings.go:142] acquiring lock: {Name:mk41907a6d0da0bb56b7cd58b5d8065ec36ecc97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:51:48.666542 1221070 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:51:48.667357 1221070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/kubeconfig: {Name:mkeb969af3beabfdafe344f27031959a97621135 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:51:48.667681 1221070 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0414 14:51:48.667715 1221070 start.go:241] waiting for startup goroutines ...
	I0414 14:51:48.667737 1221070 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 14:51:48.667972 1221070 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:51:48.670730 1221070 out.go:177] * Enabled addons: 
	I0414 14:51:48.671774 1221070 addons.go:514] duration metric: took 4.043718ms for enable addons: enabled=[]
	I0414 14:51:48.671816 1221070 start.go:246] waiting for cluster config update ...
	I0414 14:51:48.671833 1221070 start.go:255] writing updated cluster config ...
	I0414 14:51:48.673542 1221070 out.go:201] 
	I0414 14:51:48.674918 1221070 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:51:48.675012 1221070 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:51:48.676439 1221070 out.go:177] * Starting "ha-290859-m02" control-plane node in "ha-290859" cluster
	I0414 14:51:48.677470 1221070 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:51:48.677501 1221070 cache.go:56] Caching tarball of preloaded images
	I0414 14:51:48.677610 1221070 preload.go:172] Found /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0414 14:51:48.677625 1221070 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0414 14:51:48.677734 1221070 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:51:48.677945 1221070 start.go:360] acquireMachinesLock for ha-290859-m02: {Name:mk496006d22a0565bb9e0d565e1b3cb0cf0971cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 14:51:48.677999 1221070 start.go:364] duration metric: took 29.352µs to acquireMachinesLock for "ha-290859-m02"
	I0414 14:51:48.678015 1221070 start.go:96] Skipping create...Using existing machine configuration
	I0414 14:51:48.678023 1221070 fix.go:54] fixHost starting: m02
	I0414 14:51:48.678300 1221070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:51:48.678338 1221070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:51:48.694625 1221070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46149
	I0414 14:51:48.695133 1221070 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:51:48.695644 1221070 main.go:141] libmachine: Using API Version  1
	I0414 14:51:48.695672 1221070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:51:48.696059 1221070 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:51:48.696257 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:51:48.696396 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetState
	I0414 14:51:48.697918 1221070 fix.go:112] recreateIfNeeded on ha-290859-m02: state=Stopped err=<nil>
	I0414 14:51:48.697944 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	W0414 14:51:48.698147 1221070 fix.go:138] unexpected machine state, will restart: <nil>
	I0414 14:51:48.699709 1221070 out.go:177] * Restarting existing kvm2 VM for "ha-290859-m02" ...
	I0414 14:51:48.700791 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .Start
	I0414 14:51:48.701016 1221070 main.go:141] libmachine: (ha-290859-m02) starting domain...
	I0414 14:51:48.701037 1221070 main.go:141] libmachine: (ha-290859-m02) ensuring networks are active...
	I0414 14:51:48.701680 1221070 main.go:141] libmachine: (ha-290859-m02) Ensuring network default is active
	I0414 14:51:48.701964 1221070 main.go:141] libmachine: (ha-290859-m02) Ensuring network mk-ha-290859 is active
	I0414 14:51:48.702320 1221070 main.go:141] libmachine: (ha-290859-m02) getting domain XML...
	I0414 14:51:48.703123 1221070 main.go:141] libmachine: (ha-290859-m02) creating domain...
	I0414 14:51:49.928511 1221070 main.go:141] libmachine: (ha-290859-m02) waiting for IP...
	I0414 14:51:49.929302 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:51:49.929682 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:51:49.929753 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:51:49.929668 1221256 retry.go:31] will retry after 213.167481ms: waiting for domain to come up
	I0414 14:51:50.144304 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:51:50.144886 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:51:50.144914 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:51:50.144841 1221256 retry.go:31] will retry after 331.221156ms: waiting for domain to come up
	I0414 14:51:50.477450 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:51:50.477938 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:51:50.477993 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:51:50.477923 1221256 retry.go:31] will retry after 310.58732ms: waiting for domain to come up
	I0414 14:51:50.790523 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:51:50.791165 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:51:50.791199 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:51:50.791085 1221256 retry.go:31] will retry after 545.346683ms: waiting for domain to come up
	I0414 14:51:51.337935 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:51:51.338399 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:51:51.338425 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:51:51.338357 1221256 retry.go:31] will retry after 756.05518ms: waiting for domain to come up
	I0414 14:51:52.096242 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:51:52.096695 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:51:52.096730 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:51:52.096648 1221256 retry.go:31] will retry after 823.090094ms: waiting for domain to come up
	I0414 14:51:52.921657 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:51:52.922142 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:51:52.922184 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:51:52.922101 1221256 retry.go:31] will retry after 970.69668ms: waiting for domain to come up
	I0414 14:51:53.894927 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:51:53.895561 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:51:53.895594 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:51:53.895517 1221256 retry.go:31] will retry after 1.032622919s: waiting for domain to come up
	I0414 14:51:54.929442 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:51:54.929927 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:51:54.929952 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:51:54.929923 1221256 retry.go:31] will retry after 1.334812207s: waiting for domain to come up
	I0414 14:51:56.266967 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:51:56.267482 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:51:56.267510 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:51:56.267455 1221256 retry.go:31] will retry after 1.510894415s: waiting for domain to come up
	I0414 14:51:57.780426 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:51:57.780971 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:51:57.781004 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:51:57.780920 1221256 retry.go:31] will retry after 2.39467668s: waiting for domain to come up
	I0414 14:52:00.177702 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:00.178090 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:52:00.178121 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:52:00.178065 1221256 retry.go:31] will retry after 3.552625428s: waiting for domain to come up
	I0414 14:52:03.732281 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:03.732786 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:52:03.732838 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:52:03.732762 1221256 retry.go:31] will retry after 4.321714949s: waiting for domain to come up
	I0414 14:52:08.057427 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.057990 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has current primary IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.058015 1221070 main.go:141] libmachine: (ha-290859-m02) found domain IP: 192.168.39.111
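
The retry.go lines above poll for the restarted domain's DHCP lease with a growing, jittered delay (213ms, 331ms, ... 4.3s) until the IP appears. A minimal sketch of that backoff pattern, assuming illustrative names (retryWithBackoff) and growth/jitter factors rather than minikube's actual retry.go implementation:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff retries fn until it succeeds or maxAttempts is
    // reached, sleeping a jittered, growing delay between attempts.
    func retryWithBackoff(maxAttempts int, base time.Duration, fn func() error) error {
    	delay := base
    	for i := 0; i < maxAttempts; i++ {
    		if err := fn(); err == nil {
    			return nil
    		}
    		// Jitter the delay so concurrent waiters do not retry in lockstep.
    		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
    		time.Sleep(sleep)
    		delay = delay * 3 / 2 // grow roughly 1.5x per attempt
    	}
    	return errors.New("domain never reported an IP address")
    }

    func main() {
    	attempts := 0
    	_ = retryWithBackoff(15, 200*time.Millisecond, func() error {
    		attempts++
    		if attempts < 5 { // stand-in for "lease not found yet"
    			return errors.New("no IP yet")
    		}
    		return nil
    	})
    }
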
	I0414 14:52:08.058030 1221070 main.go:141] libmachine: (ha-290859-m02) reserving static IP address...
	I0414 14:52:08.058568 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "ha-290859-m02", mac: "52:54:00:f0:fd:94", ip: "192.168.39.111"} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.058598 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | skip adding static IP to network mk-ha-290859 - found existing host DHCP lease matching {name: "ha-290859-m02", mac: "52:54:00:f0:fd:94", ip: "192.168.39.111"}
	I0414 14:52:08.058616 1221070 main.go:141] libmachine: (ha-290859-m02) reserved static IP address 192.168.39.111 for domain ha-290859-m02
	I0414 14:52:08.058624 1221070 main.go:141] libmachine: (ha-290859-m02) waiting for SSH...
	I0414 14:52:08.058632 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | Getting to WaitForSSH function...
	I0414 14:52:08.061480 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.061822 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.061855 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.062002 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | Using SSH client type: external
	I0414 14:52:08.062025 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa (-rw-------)
	I0414 14:52:08.062058 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:52:08.062073 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | About to run SSH command:
	I0414 14:52:08.062084 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | exit 0
	I0414 14:52:08.183207 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | SSH cmd err, output: <nil>: 
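
WaitForSSH above probes the guest by running `exit 0` through the external ssh binary until the command succeeds. A sketch of the same probe; the ssh flags are copied from the logged command line, while the polling loop, interval, and timeout are assumptions:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForSSH polls by running "exit 0" through the system ssh binary,
    // mirroring the external-client probe in the log above.
    func waitForSSH(user, ip, keyPath string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("ssh",
    			"-F", "/dev/null",
    			"-o", "ConnectTimeout=10",
    			"-o", "StrictHostKeyChecking=no",
    			"-o", "UserKnownHostsFile=/dev/null",
    			"-o", "IdentitiesOnly=yes",
    			"-i", keyPath,
    			"-p", "22",
    			fmt.Sprintf("%s@%s", user, ip),
    			"exit 0")
    		if err := cmd.Run(); err == nil {
    			return nil // sshd answered and ran the command
    		}
    		time.Sleep(time.Second)
    	}
    	return fmt.Errorf("ssh to %s@%s not ready after %v", user, ip, timeout)
    }

    func main() {
    	key := "/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa"
    	if err := waitForSSH("docker", "192.168.39.111", key, 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
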
	I0414 14:52:08.183609 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
	I0414 14:52:08.184236 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:52:08.186802 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.187282 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.187322 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.187609 1221070 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:52:08.187825 1221070 machine.go:93] provisionDockerMachine start ...
	I0414 14:52:08.187846 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:52:08.188131 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:52:08.190391 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.190830 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.190855 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.191024 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:52:08.191211 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:08.191410 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:08.191557 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:52:08.191706 1221070 main.go:141] libmachine: Using SSH client type: native
	I0414 14:52:08.192061 1221070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:52:08.192080 1221070 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 14:52:08.291480 1221070 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0414 14:52:08.291525 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:52:08.291906 1221070 buildroot.go:166] provisioning hostname "ha-290859-m02"
	I0414 14:52:08.291946 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:52:08.292200 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:52:08.295446 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.295895 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.295926 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.296203 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:52:08.296433 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:08.296612 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:08.296787 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:52:08.297073 1221070 main.go:141] libmachine: Using SSH client type: native
	I0414 14:52:08.297293 1221070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:52:08.297305 1221070 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-290859-m02 && echo "ha-290859-m02" | sudo tee /etc/hostname
	I0414 14:52:08.410482 1221070 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-290859-m02
	
	I0414 14:52:08.410517 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:52:08.413198 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.413585 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.413621 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.413794 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:52:08.414028 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:08.414223 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:08.414369 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:52:08.414529 1221070 main.go:141] libmachine: Using SSH client type: native
	I0414 14:52:08.414731 1221070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:52:08.414746 1221070 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-290859-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-290859-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-290859-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:52:08.522305 1221070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:52:08.522338 1221070 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1196368/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1196368/.minikube}
	I0414 14:52:08.522355 1221070 buildroot.go:174] setting up certificates
	I0414 14:52:08.522368 1221070 provision.go:84] configureAuth start
	I0414 14:52:08.522377 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:52:08.522678 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:52:08.525718 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.526180 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.526208 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.526396 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:52:08.528768 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.529141 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.529174 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.529288 1221070 provision.go:143] copyHostCerts
	I0414 14:52:08.529323 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:52:08.529356 1221070 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem, removing ...
	I0414 14:52:08.529364 1221070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:52:08.529418 1221070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem (1082 bytes)
	I0414 14:52:08.529544 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:52:08.529566 1221070 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem, removing ...
	I0414 14:52:08.529571 1221070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:52:08.529594 1221070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem (1123 bytes)
	I0414 14:52:08.529638 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:52:08.529656 1221070 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem, removing ...
	I0414 14:52:08.529663 1221070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:52:08.529681 1221070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem (1675 bytes)
	I0414 14:52:08.529727 1221070 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem org=jenkins.ha-290859-m02 san=[127.0.0.1 192.168.39.111 ha-290859-m02 localhost minikube]
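
provision.go reports issuing a server certificate whose SANs cover 127.0.0.1, the VM IP, the hostname, localhost, and minikube, signed by the existing CA. A self-contained standard-library sketch of issuing such a SAN-bearing certificate; for brevity the CA here is generated in-process, whereas the real flow signs with the on-disk ca.pem/ca-key.pem:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA (assumption: the real flow loads ca.pem/ca-key.pem from disk).
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server certificate with the SAN set reported in the log line above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-290859-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.111")},
    		DNSNames:     []string{"ha-290859-m02", "localhost", "minikube"},
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
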
	I0414 14:52:08.556497 1221070 provision.go:177] copyRemoteCerts
	I0414 14:52:08.556548 1221070 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:52:08.556569 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:52:08.559078 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.559480 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.559504 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.559685 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:52:08.559875 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:08.560067 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:52:08.560219 1221070 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:52:08.637398 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0414 14:52:08.637469 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 14:52:08.661142 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0414 14:52:08.661219 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0414 14:52:08.683109 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0414 14:52:08.683191 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0414 14:52:08.705705 1221070 provision.go:87] duration metric: took 183.321321ms to configureAuth
	I0414 14:52:08.705738 1221070 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:52:08.706026 1221070 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:52:08.706045 1221070 machine.go:96] duration metric: took 518.207609ms to provisionDockerMachine
	I0414 14:52:08.706054 1221070 start.go:293] postStartSetup for "ha-290859-m02" (driver="kvm2")
	I0414 14:52:08.706063 1221070 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:52:08.706087 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:52:08.706363 1221070 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:52:08.706392 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:52:08.709099 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.709429 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.709457 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.709689 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:52:08.709903 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:08.710118 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:52:08.710263 1221070 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:52:08.791281 1221070 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:52:08.795310 1221070 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:52:08.795344 1221070 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/addons for local assets ...
	I0414 14:52:08.795409 1221070 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/files for local assets ...
	I0414 14:52:08.795482 1221070 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> 12036392.pem in /etc/ssl/certs
	I0414 14:52:08.795492 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /etc/ssl/certs/12036392.pem
	I0414 14:52:08.795570 1221070 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:52:08.806018 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:52:08.828791 1221070 start.go:296] duration metric: took 122.715902ms for postStartSetup
	I0414 14:52:08.828841 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:52:08.829192 1221070 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0414 14:52:08.829225 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:52:08.832093 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.832474 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.832500 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.832687 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:52:08.832874 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:08.833046 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:52:08.833191 1221070 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:52:08.914136 1221070 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0414 14:52:08.914227 1221070 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0414 14:52:08.970338 1221070 fix.go:56] duration metric: took 20.292306098s for fixHost
	I0414 14:52:08.970422 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:52:08.973148 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.973612 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.973662 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.973866 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:52:08.974071 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:08.974273 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:08.974383 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:52:08.974544 1221070 main.go:141] libmachine: Using SSH client type: native
	I0414 14:52:08.974752 1221070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:52:08.974761 1221070 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:52:09.075896 1221070 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744642329.038020711
	
	I0414 14:52:09.075916 1221070 fix.go:216] guest clock: 1744642329.038020711
	I0414 14:52:09.075924 1221070 fix.go:229] Guest: 2025-04-14 14:52:09.038020711 +0000 UTC Remote: 2025-04-14 14:52:08.970369466 +0000 UTC m=+44.085587632 (delta=67.651245ms)
	I0414 14:52:09.075939 1221070 fix.go:200] guest clock delta is within tolerance: 67.651245ms
	I0414 14:52:09.075944 1221070 start.go:83] releasing machines lock for "ha-290859-m02", held for 20.397936123s
	I0414 14:52:09.075962 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:52:09.076232 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:52:09.079036 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:09.079425 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:09.079456 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:09.081479 1221070 out.go:177] * Found network options:
	I0414 14:52:09.082752 1221070 out.go:177]   - NO_PROXY=192.168.39.110
	W0414 14:52:09.084044 1221070 proxy.go:119] fail to check proxy env: Error ip not in block
	I0414 14:52:09.084079 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:52:09.084689 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:52:09.084887 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:52:09.084984 1221070 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:52:09.085023 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	W0414 14:52:09.085117 1221070 proxy.go:119] fail to check proxy env: Error ip not in block
	I0414 14:52:09.085206 1221070 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0414 14:52:09.085232 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:52:09.088187 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:09.088476 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:09.088613 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:09.088643 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:09.088794 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:52:09.088903 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:09.088928 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:09.088974 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:09.089083 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:52:09.089161 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:52:09.089227 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:09.089297 1221070 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:52:09.089336 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:52:09.089483 1221070 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	W0414 14:52:09.194292 1221070 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:52:09.194439 1221070 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:52:09.211568 1221070 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 14:52:09.211600 1221070 start.go:495] detecting cgroup driver to use...
	I0414 14:52:09.211684 1221070 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0414 14:52:09.239355 1221070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0414 14:52:09.252164 1221070 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:52:09.252247 1221070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:52:09.266619 1221070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:52:09.279466 1221070 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:52:09.408504 1221070 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:52:09.554621 1221070 docker.go:233] disabling docker service ...
	I0414 14:52:09.554705 1221070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:52:09.567849 1221070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:52:09.579882 1221070 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:52:09.691627 1221070 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:52:09.801979 1221070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 14:52:09.824437 1221070 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:52:09.841408 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0414 14:52:09.851062 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0414 14:52:09.860777 1221070 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0414 14:52:09.860826 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0414 14:52:09.870133 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:52:09.879955 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0414 14:52:09.889567 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:52:09.899405 1221070 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:52:09.909754 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0414 14:52:09.919673 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0414 14:52:09.929572 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
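
The sed runs above converge /etc/containerd/config.toml on: a pinned sandbox image, cgroupfs as the cgroup driver (SystemdCgroup = false), the runc v2 shim, conf_dir pointed at /etc/cni/net.d, and unprivileged ports enabled. Roughly the fragment they produce, reconstructed from the sed expressions rather than dumped from the actual file:

    version = 2

    [plugins."io.containerd.grpc.v1.cri"]
      enable_unprivileged_ports = true
      sandbox_image = "registry.k8s.io/pause:3.10"

      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"

        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = false
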
	I0414 14:52:09.939053 1221070 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:52:09.947490 1221070 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:52:09.947546 1221070 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:52:09.959627 1221070 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 14:52:09.968379 1221070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:52:10.086027 1221070 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0414 14:52:10.118333 1221070 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0414 14:52:10.118430 1221070 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:52:10.122969 1221070 retry.go:31] will retry after 818.918333ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0414 14:52:10.943062 1221070 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
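
After restarting containerd, the code waits up to 60s for /run/containerd/containerd.sock, stat-ing the path and retrying until it exists. A deadline-bounded poll in Go; the path and the rough interval come from the log, the helper itself (waitForSocket) is illustrative:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket stats path until it exists or the deadline passes,
    // mirroring the "Will wait 60s for socket path" step above.
    func waitForSocket(path string, timeout, interval time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("socket %s did not appear within %v", path, timeout)
    		}
    		time.Sleep(interval)
    	}
    }

    func main() {
    	err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second, 800*time.Millisecond)
    	fmt.Println(err)
    }
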
	I0414 14:52:10.948132 1221070 start.go:563] Will wait 60s for crictl version
	I0414 14:52:10.948196 1221070 ssh_runner.go:195] Run: which crictl
	I0414 14:52:10.952231 1221070 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:52:10.988005 1221070 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0414 14:52:10.988097 1221070 ssh_runner.go:195] Run: containerd --version
	I0414 14:52:11.012963 1221070 ssh_runner.go:195] Run: containerd --version
	I0414 14:52:11.038206 1221070 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.23 ...
	I0414 14:52:11.039588 1221070 out.go:177]   - env NO_PROXY=192.168.39.110
	I0414 14:52:11.040724 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:52:11.043716 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:11.044108 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:11.044129 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:11.044384 1221070 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 14:52:11.048381 1221070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:52:11.060281 1221070 mustload.go:65] Loading cluster: ha-290859
	I0414 14:52:11.060535 1221070 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:52:11.060920 1221070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:52:11.060972 1221070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:52:11.076673 1221070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40435
	I0414 14:52:11.077200 1221070 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:52:11.077672 1221070 main.go:141] libmachine: Using API Version  1
	I0414 14:52:11.077694 1221070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:52:11.078067 1221070 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:52:11.078244 1221070 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:52:11.079808 1221070 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:52:11.080127 1221070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:52:11.080174 1221070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:52:11.095417 1221070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37849
	I0414 14:52:11.095844 1221070 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:52:11.096258 1221070 main.go:141] libmachine: Using API Version  1
	I0414 14:52:11.096277 1221070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:52:11.096639 1221070 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:52:11.096826 1221070 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:52:11.096989 1221070 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859 for IP: 192.168.39.111
	I0414 14:52:11.097003 1221070 certs.go:194] generating shared ca certs ...
	I0414 14:52:11.097029 1221070 certs.go:226] acquiring lock for ca certs: {Name:mk7215406b4c41badf9eca6bf9f1036fd88f670e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:52:11.097193 1221070 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key
	I0414 14:52:11.097269 1221070 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key
	I0414 14:52:11.097285 1221070 certs.go:256] generating profile certs ...
	I0414 14:52:11.097381 1221070 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key
	I0414 14:52:11.097463 1221070 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e
	I0414 14:52:11.097524 1221070 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key
	I0414 14:52:11.097538 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0414 14:52:11.097560 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0414 14:52:11.097577 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0414 14:52:11.097593 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0414 14:52:11.097611 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0414 14:52:11.097629 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0414 14:52:11.097646 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0414 14:52:11.097662 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0414 14:52:11.097724 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem (1338 bytes)
	W0414 14:52:11.097762 1221070 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639_empty.pem, impossibly tiny 0 bytes
	I0414 14:52:11.097777 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 14:52:11.097809 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem (1082 bytes)
	I0414 14:52:11.097839 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:52:11.097866 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem (1675 bytes)
	I0414 14:52:11.097945 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:52:11.097992 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:52:11.098014 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem -> /usr/share/ca-certificates/1203639.pem
	I0414 14:52:11.098038 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /usr/share/ca-certificates/12036392.pem
	I0414 14:52:11.098070 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:52:11.100966 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:52:11.101386 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:52:11.101405 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:52:11.101550 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:52:11.101731 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:52:11.101862 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:52:11.102010 1221070 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:52:11.175602 1221070 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0414 14:52:11.180006 1221070 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0414 14:52:11.189968 1221070 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0414 14:52:11.193728 1221070 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0414 14:52:11.203099 1221070 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0414 14:52:11.207009 1221070 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0414 14:52:11.216071 1221070 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0414 14:52:11.219518 1221070 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0414 14:52:11.228688 1221070 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0414 14:52:11.232239 1221070 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0414 14:52:11.241095 1221070 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0414 14:52:11.244486 1221070 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
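
The "scp ... --> memory" lines read the primary control plane's shared key material (sa.pub/sa.key, front-proxy CA, etcd CA) into memory so it can be pushed to the joining node. A sketch of reading a remote file into memory over SSH; it assumes the golang.org/x/crypto/ssh package, whereas minikube's ssh_runner has its own transport:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // readRemoteFile cats path on the remote host and returns the bytes,
    // the moral equivalent of the "scp ... --> memory" lines above.
    func readRemoteFile(client *ssh.Client, path string) ([]byte, error) {
    	sess, err := client.NewSession()
    	if err != nil {
    		return nil, err
    	}
    	defer sess.Close()
    	return sess.Output("sudo cat " + path)
    }

    func main() {
    	key, _ := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/ha-290859/id_rsa"))
    	signer, _ := ssh.ParsePrivateKey(key)
    	client, err := ssh.Dial("tcp", "192.168.39.110:22", &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a throwaway test VM
    	})
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	saPub, _ := readRemoteFile(client, "/var/lib/minikube/certs/sa.pub")
    	fmt.Printf("read %d bytes of sa.pub into memory\n", len(saPub))
    }
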
	I0414 14:52:11.253441 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:52:11.277269 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:52:11.299096 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:52:11.320223 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:52:11.341633 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0414 14:52:11.362868 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 14:52:11.386598 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:52:11.408609 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 14:52:11.430516 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:52:11.452312 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem --> /usr/share/ca-certificates/1203639.pem (1338 bytes)
	I0414 14:52:11.474971 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /usr/share/ca-certificates/12036392.pem (1708 bytes)
	I0414 14:52:11.496336 1221070 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0414 14:52:11.511579 1221070 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0414 14:52:11.526436 1221070 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0414 14:52:11.541220 1221070 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0414 14:52:11.556734 1221070 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0414 14:52:11.573710 1221070 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0414 14:52:11.589103 1221070 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0414 14:52:11.604809 1221070 ssh_runner.go:195] Run: openssl version
	I0414 14:52:11.610110 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1203639.pem && ln -fs /usr/share/ca-certificates/1203639.pem /etc/ssl/certs/1203639.pem"
	I0414 14:52:11.620147 1221070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1203639.pem
	I0414 14:52:11.624394 1221070 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1203639.pem
	I0414 14:52:11.624454 1221070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1203639.pem
	I0414 14:52:11.629850 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1203639.pem /etc/ssl/certs/51391683.0"
	I0414 14:52:11.639862 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12036392.pem && ln -fs /usr/share/ca-certificates/12036392.pem /etc/ssl/certs/12036392.pem"
	I0414 14:52:11.649796 1221070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12036392.pem
	I0414 14:52:11.653828 1221070 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/12036392.pem
	I0414 14:52:11.653894 1221070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12036392.pem
	I0414 14:52:11.659174 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12036392.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 14:52:11.669032 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:52:11.678764 1221070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:52:11.682817 1221070 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:52:11.682885 1221070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:52:11.688098 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
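The three Run lines above install each CA into the OpenSSL trust store by subject-hash symlink: `openssl x509 -hash` prints the hash, and the cert is linked under `<hash>.0` so OpenSSL's CApath lookup can resolve it. A minimal sketch of the same steps by hand, using the minikubeCA.pem path and the b5213941 hash shown in the log:
	# Compute the OpenSSL subject hash for the CA (prints b5213941 for this cert)
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# Link the cert under <hash>.0 so CApath-based verification can find it
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0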
	I0414 14:52:11.697831 1221070 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:52:11.701550 1221070 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 14:52:11.701601 1221070 kubeadm.go:934] updating node {m02 192.168.39.111 8443 v1.32.2 containerd true true} ...
	I0414 14:52:11.701691 1221070 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-290859-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
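The empty ExecStart= line in the rendered unit is deliberate: in a systemd drop-in, a bare ExecStart= first clears the ExecStart inherited from the base unit so the next line can redefine it with the node-specific flags (here --hostname-override=ha-290859-m02 and --node-ip=192.168.39.111). A quick way to inspect the merged result on the node, a sketch assuming shell access (e.g. via minikube ssh):
	# Show the base kubelet.service plus the 10-kubeadm.conf drop-in scp'd below
	systemctl cat kubelet
	# Confirm the effective command line after the drop-in override
	systemctl show kubelet -p ExecStart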
	I0414 14:52:11.701720 1221070 kube-vip.go:115] generating kube-vip config ...
	I0414 14:52:11.701774 1221070 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0414 14:52:11.717854 1221070 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0414 14:52:11.717951 1221070 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.10
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
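This static-pod manifest runs kube-vip on the control plane; with cp_enable and vip_leaderelection set, the elected leader claims the VIP 192.168.39.254 via ARP on eth0 (vip_interface), and lb_enable load-balances port 8443 across the API servers. A minimal check from a control-plane node, a sketch assuming the VIP is currently held there:
	# The VIP should appear as a secondary address on eth0 (vip_interface above)
	ip addr show eth0 | grep 192.168.39.254
	# The apiserver should answer on the VIP at lb_port; -k skips CA verification
	curl -k https://192.168.39.254:8443/version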
	I0414 14:52:11.718009 1221070 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 14:52:11.727618 1221070 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 14:52:11.727676 1221070 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0414 14:52:11.736203 1221070 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0414 14:52:11.751774 1221070 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 14:52:11.768120 1221070 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1442 bytes)
	I0414 14:52:11.783489 1221070 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0414 14:52:11.787006 1221070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:52:11.798424 1221070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:52:11.903985 1221070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:52:11.921547 1221070 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0414 14:52:11.921874 1221070 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:52:11.923383 1221070 out.go:177] * Verifying Kubernetes components...
	I0414 14:52:11.924548 1221070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:52:12.079718 1221070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:52:12.096131 1221070 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:52:12.096280 1221070 kapi.go:59] client config for ha-290859: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt", KeyFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key", CAFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0414 14:52:12.096344 1221070 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.110:8443
	I0414 14:52:12.096629 1221070 node_ready.go:35] waiting up to 6m0s for node "ha-290859-m02" to be "Ready" ...
	I0414 14:52:12.096770 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:12.096778 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:12.096786 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:12.096792 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:12.105014 1221070 round_trippers.go:581] Response Status: 404 Not Found in 8 milliseconds
	I0414 14:52:12.596840 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:12.596864 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:12.596873 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:12.596878 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:12.599193 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:13.096896 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:13.096921 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:13.096930 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:13.096935 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:13.099008 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:13.597788 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:13.597813 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:13.597822 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:13.597826 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:13.600141 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:14.097364 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:14.097390 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:14.097398 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:14.097401 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:14.099682 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:14.099822 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
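From here the log is a roughly 500ms poll loop: node_ready.go retries GET /api/v1/nodes/ha-290859-m02 against the primary apiserver (after the stale-VIP override above) until the kubelet on m02 registers itself, and every 404 below means the Node object does not exist yet. The same check by hand, a sketch assuming the kubeconfig path from the log:
	# Poll the apiserver directly, as the test does
	kubectl --kubeconfig /home/jenkins/minikube-integration/20512-1196368/kubeconfig \
	  get node ha-290859-m02
	# expect, while the node is absent:
	#   Error from server (NotFound): nodes "ha-290859-m02" not found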
	I0414 14:52:14.597362 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:14.597390 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:14.597401 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:14.597407 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:14.599923 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:15.096865 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:15.096890 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:15.096898 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:15.096903 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:15.099533 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:15.597246 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:15.597272 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:15.597280 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:15.597285 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:15.599591 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:16.096978 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:16.097005 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:16.097014 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:16.097019 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:16.099644 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:16.597351 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:16.597377 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:16.597385 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:16.597389 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:16.599794 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:16.599885 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:52:17.097583 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:17.097609 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:17.097621 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:17.097630 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:17.099987 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:17.597752 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:17.597777 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:17.597792 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:17.597798 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:17.599966 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:18.097796 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:18.097830 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:18.097843 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:18.097850 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:18.100104 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:18.597881 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:18.597906 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:18.597918 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:18.597923 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:18.600349 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:18.600437 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:52:19.097732 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:19.097758 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:19.097766 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:19.097772 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:19.100346 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:19.597034 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:19.597059 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:19.597074 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:19.597081 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:19.600054 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:20.097051 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:20.097075 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:20.097085 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:20.097091 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:20.099439 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:20.597189 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:20.597218 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:20.597230 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:20.597234 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:20.599635 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:21.097052 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:21.097078 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:21.097090 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:21.097095 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:21.099916 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:21.100012 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:52:21.597682 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:21.597708 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:21.597716 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:21.597722 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:21.600175 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:22.097764 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:22.097789 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:22.097798 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:22.097803 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:22.100278 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:22.596982 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:22.597008 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:22.597017 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:22.597021 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:22.599616 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:23.097388 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:23.097414 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:23.097423 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:23.097428 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:23.099818 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:23.597623 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:23.597655 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:23.597664 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:23.597669 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:23.600007 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:23.600102 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:52:24.097112 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:24.097137 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:24.097147 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:24.097151 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:24.099644 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:24.597329 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:24.597355 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:24.597363 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:24.597369 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:24.599961 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:25.096893 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:25.096919 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:25.096928 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:25.096934 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:25.098708 1221070 round_trippers.go:581] Response Status: 404 Not Found in 1 milliseconds
	I0414 14:52:25.597473 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:25.597500 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:25.597509 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:25.597514 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:25.600056 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:25.600156 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:52:26.097355 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:26.097378 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:26.097387 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:26.097391 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:26.099832 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:26.597648 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:26.597673 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:26.597684 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:26.597687 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:26.600271 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:27.096929 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:27.096954 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:27.096963 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:27.096967 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:27.099168 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:27.596858 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:27.596884 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:27.596893 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:27.596899 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:27.599457 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:28.096940 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:28.096964 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:28.096972 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:28.097006 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:28.099432 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:28.099546 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:52:28.597101 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:28.597126 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:28.597135 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:28.597140 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:28.599552 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:29.097020 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:29.097048 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:29.097060 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:29.097067 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:29.099638 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:29.597365 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:29.597391 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:29.597399 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:29.597405 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:29.599700 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:30.097686 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:30.097711 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:30.097720 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:30.097726 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:30.099828 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:30.099939 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:52:30.597659 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:30.597687 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:30.597696 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:30.597701 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:30.600246 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:31.097571 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:31.097595 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:31.097603 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:31.097608 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:31.100169 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:31.597822 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:31.597851 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:31.597861 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:31.597870 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:31.600466 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:32.097138 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:32.097164 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:32.097173 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:32.097177 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:32.099723 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:32.597477 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:32.597503 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:32.597511 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:32.597515 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:32.599830 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:32.599932 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:52:33.097613 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:33.097641 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:33.097649 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:33.097654 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:33.099925 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:33.597289 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:33.597314 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:33.597323 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:33.597327 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:33.599654 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:34.096888 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:34.096919 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:34.096927 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:34.096933 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:34.099431 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:34.596955 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:34.596980 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:34.596989 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:34.596993 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:34.599335 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:35.097100 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:35.097123 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:35.097131 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:35.097137 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:35.099289 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:35.099382 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:52:35.596984 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:35.597012 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:35.597021 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:35.597025 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:35.599385 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:36.097705 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:36.097729 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:36.097738 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:36.097743 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:36.100126 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:36.597126 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:36.597155 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:36.597165 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:36.597169 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:36.600643 1221070 round_trippers.go:581] Response Status: 404 Not Found in 3 milliseconds
	I0414 14:52:37.097395 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:37.097421 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:37.097430 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:37.097434 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:37.099784 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:37.099868 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:52:37.597613 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:37.597644 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:37.597653 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:37.597658 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:37.599841 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:38.097708 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:38.097734 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:38.097743 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:38.097746 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:38.100373 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:38.597097 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:38.597124 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:38.597132 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:38.597137 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:38.599858 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:39.097386 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:39.097414 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:39.097422 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:39.097428 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:39.099969 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:39.100071 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:52:39.597770 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:39.597797 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:39.597806 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:39.597811 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:39.600350 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:40.097448 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:40.097473 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:40.097482 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:40.097487 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:40.099992 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:40.597766 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:40.597794 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:40.597802 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:40.597807 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:40.600235 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:41.097595 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:41.097620 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:41.097628 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:41.097633 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:41.100188 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:41.100291 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:52:41.597223 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:41.597251 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:41.597259 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:41.597264 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:41.599796 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:42.097539 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:42.097565 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:42.097574 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:42.097578 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:42.099998 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:42.596849 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:42.596874 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:42.596882 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:42.596886 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:42.599276 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:43.097056 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:43.097082 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:43.097091 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:43.097095 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:43.099531 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:43.597247 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:43.597271 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:43.597279 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:43.597283 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:43.599641 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:43.599742 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:52:44.097877 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:44.097905 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:44.097916 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:44.097922 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:44.100517 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:44.597248 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:44.597278 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:44.597286 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:44.597290 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:44.599800 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:45.097824 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:45.097852 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:45.097861 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:45.097865 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:45.100105 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:45.597856 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:45.597883 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:45.597892 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:45.597898 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:45.600432 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:45.600532 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:52:46.097855 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:46.097880 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:46.097888 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:46.097891 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:46.100551 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:46.597726 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:46.597754 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:46.597767 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:46.597772 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:46.600401 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:47.097070 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:47.097095 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:47.097104 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:47.097108 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:47.102860 1221070 round_trippers.go:581] Response Status: 404 Not Found in 5 milliseconds
	I0414 14:52:47.597648 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:47.597673 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:47.597682 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:47.597686 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:47.600174 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:48.096965 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:48.096990 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:48.096998 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:48.097002 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:48.099639 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:48.099731 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:52:48.597371 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:48.597405 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:48.597416 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:48.597421 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:48.599718 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:49.097094 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:49.097133 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:49.097142 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:49.097145 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:49.099888 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:49.597678 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:49.597705 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:49.597713 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:49.597718 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:49.600370 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	[... the same GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02 request/response cycle (identical Accept and User-Agent headers) repeats every ~500 ms from 14:52:50.097 through 14:53:46.099, each attempt answered "Response Status: 404 Not Found in 1-3 milliseconds"; node_ready.go:53 records `error getting node "ha-290859-m02": nodes "ha-290859-m02" not found` after roughly every fifth attempt (14:52:50, 14:52:52, 14:52:54, ..., 14:53:45) ...]
	I0414 14:53:46.597031 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:46.597059 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:46.597068 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:46.597075 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:46.599403 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:47.097731 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:47.097757 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:47.097766 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:47.097769 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:47.100280 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:47.100377 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:47.597123 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:47.597151 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:47.597170 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:47.597175 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:47.599534 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:48.097336 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:48.097361 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:48.097370 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:48.097374 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:48.099675 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:48.597501 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:48.597534 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:48.597547 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:48.597560 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:48.600236 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:49.097710 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:49.097738 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:49.097747 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:49.097750 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:49.100057 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:49.596902 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:49.596926 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:49.596935 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:49.596941 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:49.599460 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:49.599564 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:50.097595 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:50.097620 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:50.097629 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:50.097633 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:50.099825 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:50.597754 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:50.597780 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:50.597789 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:50.597793 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:50.600075 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:51.097870 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:51.097899 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:51.097909 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:51.097929 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:51.100654 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:51.596969 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:51.596997 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:51.597006 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:51.597010 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:51.599564 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:51.599659 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:52.097262 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:52.097289 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:52.097297 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:52.097302 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:52.099885 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:52.597623 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:52.597649 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:52.597657 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:52.597662 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:52.600287 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:53.097029 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:53.097056 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:53.097064 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:53.097070 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:53.100094 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:53.597857 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:53.597883 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:53.597892 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:53.597896 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:53.600381 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:53.600486 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:54.097694 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:54.097720 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:54.097733 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:54.097739 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:54.100246 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:54.596985 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:54.597015 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:54.597024 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:54.597029 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:54.599531 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:55.097645 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:55.097670 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:55.097678 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:55.097682 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:55.100175 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:55.596893 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:55.596928 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:55.596937 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:55.596942 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:55.599467 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:56.097332 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:56.097359 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:56.097367 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:56.097372 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:56.099838 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:56.099935 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:56.597119 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:56.597143 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:56.597152 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:56.597156 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:56.599329 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:57.097196 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:57.097223 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:57.097233 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:57.097238 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:57.099869 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:57.597766 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:57.597794 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:57.597806 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:57.597810 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:57.600130 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:58.096957 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:58.096983 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:58.096991 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:58.096999 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:58.099238 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:58.597087 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:58.597112 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:58.597126 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:58.597132 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:58.599330 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:58.599420 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:59.097878 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:59.097909 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:59.097921 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:59.097927 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:59.100274 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:59.597081 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:59.597111 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:59.597122 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:59.597127 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:59.599692 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:00.097673 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:00.097700 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:00.097709 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:00.097712 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:00.100091 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:00.597900 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:00.597929 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:00.597940 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:00.597946 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:00.600276 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:00.600373 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:01.097002 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:01.097028 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:01.097036 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:01.097042 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:01.099132 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:01.597696 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:01.597720 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:01.597729 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:01.597734 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:01.600078 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:02.096932 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:02.096958 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:02.096966 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:02.096971 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:02.099544 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:02.597385 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:02.597411 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:02.597419 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:02.597424 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:02.599758 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:03.097724 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:03.097751 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:03.097759 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:03.097763 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:03.099959 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:03.100080 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:03.596849 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:03.596874 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:03.596883 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:03.596887 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:03.599335 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:04.097559 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:04.097583 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:04.097591 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:04.097596 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:04.099995 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:04.597777 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:04.597812 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:04.597832 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:04.597838 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:04.600226 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:05.097053 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:05.097079 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:05.097088 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:05.097092 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:05.099413 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:05.597132 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:05.597157 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:05.597175 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:05.597181 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:05.599523 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:05.599615 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:06.097257 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:06.097285 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:06.097294 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:06.097298 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:06.099686 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:06.597194 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:06.597218 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:06.597233 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:06.597237 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:06.599753 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:07.097514 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:07.097540 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:07.097548 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:07.097555 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:07.100208 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:07.596890 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:07.596917 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:07.596926 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:07.596929 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:07.599139 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:08.096999 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:08.097025 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:08.097034 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:08.097038 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:08.099440 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:08.099538 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:08.597199 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:08.597225 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:08.597233 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:08.597236 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:08.599496 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:09.096957 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:09.096982 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:09.096991 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:09.096995 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:09.099328 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:09.597143 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:09.597166 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:09.597175 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:09.597187 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:09.599350 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:10.097206 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:10.097231 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:10.097240 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:10.097243 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:10.099687 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:10.099779 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:10.597576 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:10.597599 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:10.597608 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:10.597613 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:10.599844 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:11.097696 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:11.097722 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:11.097730 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:11.097735 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:11.100237 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:11.597785 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:11.597807 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:11.597816 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:11.597823 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:11.600490 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:12.097100 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:12.097126 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:12.097135 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:12.097140 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:12.099612 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:12.597382 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:12.597416 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:12.597430 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:12.597439 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:12.599678 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:12.599758 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:13.097501 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:13.097526 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:13.097535 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:13.097540 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:13.099917 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:13.597744 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:13.597770 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:13.597779 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:13.597785 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:13.600202 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:14.097453 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:14.097481 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:14.097491 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:14.097495 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:14.100217 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:14.596880 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:14.596907 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:14.596916 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:14.596921 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:14.599285 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:15.097175 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:15.097200 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:15.097209 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:15.097212 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:15.099276 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:15.099364 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:15.597074 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:15.597108 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:15.597120 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:15.597125 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:15.599444 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:16.097331 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:16.097360 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:16.097373 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:16.097383 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:16.099711 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:16.597474 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:16.597502 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:16.597512 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:16.597517 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:16.599821 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:17.097721 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:17.097747 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:17.097762 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:17.097768 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:17.100198 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:17.100276 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:17.596982 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:17.597006 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:17.597014 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:17.597018 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:17.599367 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:18.097273 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:18.097299 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:18.097310 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:18.097314 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:18.099609 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:18.597568 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:18.597593 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:18.597602 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:18.597606 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:18.600731 1221070 round_trippers.go:581] Response Status: 404 Not Found in 3 milliseconds
	I0414 14:54:19.097140 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:19.097166 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:19.097175 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:19.097180 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:19.099397 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:19.597213 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:19.597238 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:19.597247 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:19.597252 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:19.599471 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:19.599566 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:20.097477 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:20.097502 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:20.097511 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:20.097515 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:20.099861 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:20.597797 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:20.597825 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:20.597837 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:20.597845 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:20.600174 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:21.097026 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:21.097053 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:21.097066 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:21.097072 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:21.099500 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:21.597281 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:21.597304 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:21.597313 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:21.597317 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:21.599496 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:21.599588 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:22.097325 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:22.097355 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:22.097366 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:22.097370 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:22.099812 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:22.597762 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:22.597792 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:22.597804 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:22.597817 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:22.599813 1221070 round_trippers.go:581] Response Status: 404 Not Found in 1 milliseconds
	I0414 14:54:23.097828 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:23.097858 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:23.097871 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:23.097881 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:23.100396 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:23.597213 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:23.597241 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:23.597252 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:23.597258 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:23.599717 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:23.599796 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:24.096996 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:24.097021 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:24.097049 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:24.097055 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:24.099311 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:24.597126 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:24.597149 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:24.597157 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:24.597162 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:24.599602 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:25.097673 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:25.097695 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:25.097703 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:25.097710 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:25.099822 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:25.597641 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:25.597667 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:25.597675 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:25.597678 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:25.600012 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:25.600100 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:26.097816 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:26.097842 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:26.097850 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:26.097854 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:26.100489 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:26.597097 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:26.597122 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:26.597132 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:26.597137 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:26.599865 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:27.097687 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:27.097714 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:27.097723 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:27.097728 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:27.100355 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:27.597087 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:27.597111 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:27.597124 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:27.597128 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:27.599434 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:28.097160 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:28.097192 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:28.097200 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:28.097205 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:28.099497 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:28.099582 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:28.597237 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:28.597261 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:28.597272 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:28.597278 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:28.599694 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:29.097091 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:29.097118 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:29.097127 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:29.097132 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:29.099540 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:29.597363 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:29.597392 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:29.597405 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:29.597411 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:29.600172 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:30.097121 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:30.097144 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:30.097153 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:30.097157 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:30.099513 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:30.099612 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:30.597347 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:30.597371 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:30.597380 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:30.597384 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:30.600156 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:31.096952 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:31.096988 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:31.096997 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:31.097001 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:31.099465 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:31.597116 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:31.597143 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:31.597153 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:31.597158 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:31.599567 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:32.097317 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:32.097346 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:32.097358 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:32.097365 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:32.099660 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:32.099757 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	[... 574 similar log lines omitted: the same GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02 poll repeats roughly every 500 ms through 14:55:27, each attempt answered "404 Not Found" in 1-3 milliseconds, and node_ready.go:53 logs error getting node "ha-290859-m02": nodes "ha-290859-m02" not found after every few attempts ...]
	I0414 14:55:27.596843 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:27.596866 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:27.596875 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:27.596880 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:27.598858 1221070 round_trippers.go:581] Response Status: 404 Not Found in 1 milliseconds
	I0414 14:55:28.097716 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:28.097744 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:28.097752 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:28.097759 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:28.100226 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:28.596972 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:28.596999 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:28.597008 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:28.597013 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:28.599202 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:29.097781 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:29.097804 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:29.097814 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:29.097819 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:29.100259 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:29.100355 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:29.596974 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:29.597007 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:29.597018 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:29.597023 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:29.599234 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:30.097347 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:30.097369 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:30.097379 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:30.097384 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:30.099858 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:30.597703 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:30.597732 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:30.597742 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:30.597747 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:30.600213 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:31.096866 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:31.096894 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:31.096910 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:31.096925 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:31.098999 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:31.596844 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:31.596869 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:31.596877 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:31.596881 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:31.599416 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:31.599520 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:32.097294 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:32.097320 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:32.097329 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:32.097334 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:32.099664 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:32.597534 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:32.597562 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:32.597573 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:32.597581 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:32.599997 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:33.097885 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:33.097913 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:33.097925 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:33.097933 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:33.100424 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:33.597212 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:33.597245 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:33.597256 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:33.597261 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:33.599737 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:33.599825 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:34.096946 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:34.096977 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:34.096990 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:34.096997 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:34.099325 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:34.597051 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:34.597077 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:34.597088 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:34.597094 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:34.599638 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:35.097797 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:35.097822 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:35.097832 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:35.097839 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:35.100270 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:35.597109 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:35.597137 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:35.597145 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:35.597150 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:35.599542 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:36.097465 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:36.097491 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:36.097500 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:36.097505 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:36.100187 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:36.100290 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:36.596906 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:36.596932 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:36.596944 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:36.596950 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:36.599839 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:37.097766 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:37.097792 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:37.097801 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:37.097807 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:37.099951 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:37.597950 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:37.597979 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:37.597989 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:37.597993 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:37.600410 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:38.097271 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:38.097298 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:38.097306 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:38.097311 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:38.099663 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:38.597601 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:38.597627 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:38.597636 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:38.597647 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:38.600447 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:38.600553 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:39.097748 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:39.097775 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:39.097786 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:39.097794 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:39.100150 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:39.596990 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:39.597019 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:39.597028 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:39.597032 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:39.599406 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:40.097366 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:40.097396 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:40.097409 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:40.097416 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:40.099965 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:40.597743 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:40.597771 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:40.597780 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:40.597785 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:40.600273 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:41.096973 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:41.096997 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:41.097006 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:41.097013 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:41.099218 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:41.099337 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:41.596871 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:41.596897 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:41.596908 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:41.596913 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:41.599017 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:42.097855 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:42.097889 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:42.097899 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:42.097905 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:42.101284 1221070 round_trippers.go:581] Response Status: 404 Not Found in 3 milliseconds
	I0414 14:55:42.596957 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:42.596996 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:42.597008 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:42.597016 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:42.599231 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:43.097007 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:43.097034 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:43.097046 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:43.097051 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:43.099362 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:43.099452 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:43.597120 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:43.597147 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:43.597157 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:43.597164 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:43.599396 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:44.097698 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:44.097725 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:44.097734 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:44.097738 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:44.099914 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:44.597690 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:44.597715 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:44.597724 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:44.597729 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:44.600159 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:45.097089 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:45.097112 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:45.097121 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:45.097125 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:45.099361 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:45.596975 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:45.597002 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:45.597010 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:45.597014 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:45.599569 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:45.599649 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:46.097457 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:46.097483 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:46.097492 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:46.097497 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:46.099821 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:46.597701 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:46.597727 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:46.597735 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:46.597739 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:46.600275 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:47.097117 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:47.097141 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:47.097150 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:47.097154 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:47.099568 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:47.597488 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:47.597514 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:47.597522 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:47.597527 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:47.599944 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:47.600100 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:48.096867 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:48.096892 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:48.096908 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:48.096911 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:48.099730 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:48.597476 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:48.597506 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:48.597514 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:48.597520 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:48.599790 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:49.097193 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:49.097219 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:49.097228 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:49.097231 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:49.099213 1221070 round_trippers.go:581] Response Status: 404 Not Found in 1 milliseconds
	I0414 14:55:49.596898 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:49.596923 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:49.596931 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:49.596935 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:49.599211 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:50.097588 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:50.097612 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:50.097622 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:50.097626 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:50.099587 1221070 round_trippers.go:581] Response Status: 404 Not Found in 1 milliseconds
	I0414 14:55:50.099671 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:50.597293 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:50.597326 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:50.597335 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:50.597346 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:50.599755 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:51.097570 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:51.097599 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:51.097608 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:51.097613 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:51.100622 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:51.597436 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:51.597463 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:51.597472 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:51.597477 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:51.599799 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:52.097594 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:52.097621 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:52.097631 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:52.097635 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:52.100149 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:52.100239 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:52.596871 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:52.596917 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:52.596927 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:52.596932 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:52.598861 1221070 round_trippers.go:581] Response Status: 404 Not Found in 1 milliseconds
	I0414 14:55:53.097658 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:53.097687 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:53.097695 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:53.097701 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:53.100104 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:53.597899 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:53.597931 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:53.597939 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:53.597944 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:53.600381 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:54.097688 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:54.097715 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:54.097724 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:54.097728 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:54.100282 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:54.100365 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:54.597098 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:54.597127 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:54.597135 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:54.597139 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:54.599447 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:55.097620 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:55.097648 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:55.097658 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:55.097663 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:55.100052 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:55.596920 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:55.596949 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:55.596957 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:55.596964 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:55.599399 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:56.097258 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:56.097285 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:56.097294 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:56.097300 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:56.099626 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:56.597512 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:56.597537 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:56.597546 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:56.597550 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:56.599780 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:56.599862 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:57.097715 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:57.097744 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:57.097753 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:57.097758 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:57.100249 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:57.597037 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:57.597065 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:57.597073 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:57.597079 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:57.599410 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:58.097243 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:58.097271 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:58.097281 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:58.097286 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:58.099805 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:58.597743 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:58.597775 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:58.597785 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:58.597791 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:58.599981 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:58.600099 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:59.097525 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:59.097554 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:59.097563 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:59.097567 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:59.100128 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:59.596950 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:59.596975 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:59.596983 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:59.596987 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:59.599509 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:00.097582 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:00.097606 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:00.097615 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:00.097620 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:00.099878 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:00.597634 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:00.597660 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:00.597669 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:00.597673 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:00.599960 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:01.097755 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:01.097779 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:01.097788 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:01.097793 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:01.100104 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:01.100191 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:56:01.597749 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:01.597778 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:01.597789 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:01.597799 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:01.600379 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:02.097127 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:02.097163 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:02.097172 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:02.097179 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:02.099347 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:02.597084 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:02.597114 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:02.597122 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:02.597126 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:02.599484 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:03.097203 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:03.097229 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:03.097244 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:03.097249 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:03.099750 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:03.597532 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:03.597557 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:03.597565 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:03.597570 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:03.599887 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:03.599994 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:56:04.097156 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:04.097182 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:04.097193 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:04.097202 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:04.099543 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:04.597391 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:04.597422 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:04.597434 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:04.597441 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:04.599613 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:05.097696 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:05.097719 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:05.097727 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:05.097733 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:05.101649 1221070 round_trippers.go:581] Response Status: 404 Not Found in 3 milliseconds
	I0414 14:56:05.597340 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:05.597364 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:05.597373 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:05.597379 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:05.599888 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:05.600026 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:56:06.097634 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:06.097659 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:06.097668 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:06.097672 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:06.099863 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:06.597652 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:06.597686 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:06.597701 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:06.597707 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:06.599965 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:07.097782 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:07.097812 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:07.097825 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:07.097833 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:07.100367 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:07.597100 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:07.597132 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:07.597144 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:07.597151 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:07.599359 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:08.097183 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:08.097225 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:08.097240 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:08.097248 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:08.099618 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:08.099711 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:56:08.597331 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:08.597358 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:08.597370 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:08.597377 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:08.599820 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:09.097223 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:09.097254 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:09.097264 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:09.097268 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:09.099655 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:09.597538 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:09.597562 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:09.597570 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:09.597576 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:09.599815 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:10.097831 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:10.097853 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:10.097861 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:10.097865 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:10.100242 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:10.100337 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:56:10.597109 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:10.597137 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:10.597146 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:10.597152 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:10.600167 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:11.097037 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:11.097061 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:11.097070 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:11.097076 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:11.099474 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:11.597114 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:11.597141 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:11.597150 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:11.597155 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:11.599707 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:12.097023 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:12.097048 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:12.097056 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:12.097061 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:12.099277 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:12.099371 1221070 node_ready.go:38] duration metric: took 4m0.002706246s for node "ha-290859-m02" to be "Ready" ...
	I0414 14:56:12.101227 1221070 out.go:201] 
	W0414 14:56:12.102352 1221070 out.go:270] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0414 14:56:12.102371 1221070 out.go:270] * 
	W0414 14:56:12.103364 1221070 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 14:56:12.104737 1221070 out.go:201] 
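	
	[editor's note] The four minutes of 404s above are minikube's node-readiness wait: node_ready.go re-issues GET /api/v1/nodes/ha-290859-m02 roughly every 500ms (polls land at ~.097 and ~.597 each second) until the node object exists and reports Ready; here the node never registers, the 4m wait expires ("took 4m0.002706246s"), and start aborts with GUEST_START. A minimal client-go sketch of that polling pattern, for orientation only — waitForNodeReady and the package name are illustrative assumptions, not minikube's actual code; the 500ms interval and 4m timeout are taken from the log:
	
	// Package nodewait: illustrative sketch of a node-readiness poll.
	// NOT minikube's node_ready.go; assumes a standard client-go clientset.
	package nodewait
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)
	
	// waitForNodeReady polls GET /api/v1/nodes/<name> every 500ms for up to
	// 4 minutes, mirroring the loop visible in the log above.
	func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if apierrors.IsNotFound(err) {
					// Node object not registered yet -- the 404s above; keep polling.
					fmt.Printf("error getting node %q: %v\n", name, err)
					return false, nil
				}
				if err != nil {
					return false, err // unexpected API error: stop polling
				}
				// Node exists: report ready only once its Ready condition is True.
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}
	
	When the timeout fires, PollUntilContextTimeout returns a context-deadline error, which matches the "waitNodeCondition: context deadline exceeded" wrapped into the GUEST_START failure above.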
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ea9e85492cab1       6e38f40d628db       4 minutes ago       Running             storage-provisioner       2                   22012253a39e5       storage-provisioner
	6def8b5e81c3c       8c811b4aec35f       4 minutes ago       Running             busybox                   1                   8810167e1850b       busybox-58667487b6-t6bgg
	d9bf8cef6e955       c69fa2e9cbf5f       4 minutes ago       Running             coredns                   1                   ae09d1f35f5bb       coredns-668d6bf9bc-wbn4p
	c3c2f4d5fe419       c69fa2e9cbf5f       4 minutes ago       Running             coredns                   1                   8b812c2dfd4e4       coredns-668d6bf9bc-qnl6q
	607041fc2f4ed       df3849d954c98       4 minutes ago       Running             kindnet-cni               1                   4c291c3e02236       kindnet-hm99t
	acc7b3f819a6b       6e38f40d628db       4 minutes ago       Exited              storage-provisioner       1                   22012253a39e5       storage-provisioner
	1c01d86a74294       f1332858868e1       4 minutes ago       Running             kube-proxy                1                   756822c1e13ce       kube-proxy-cg945
	e8658abcccb8b       b6a454c5a800d       4 minutes ago       Running             kube-controller-manager   1                   b171c03689d46       kube-controller-manager-ha-290859
	29445064369e5       d8e673e7c9983       4 minutes ago       Running             kube-scheduler            1                   6e1304537402c       kube-scheduler-ha-290859
	6bb8bbfa1b317       a9e7e6b294baf       4 minutes ago       Running             etcd                      1                   d32dfc76a4340       etcd-ha-290859
	00b109770be1c       85b7a174738ba       4 minutes ago       Running             kube-apiserver            1                   eb5666eae29e1       kube-apiserver-ha-290859
	6dc42b262abf6       6ff023a402a69       4 minutes ago       Running             kube-vip                  0                   c4bd0bf012eaf       kube-vip-ha-290859
	24e6d7cfe7ea4       8c811b4aec35f       26 minutes ago      Exited              busybox                   0                   78438e8022143       busybox-58667487b6-t6bgg
	731a9f2fe8645       c69fa2e9cbf5f       26 minutes ago      Exited              coredns                   0                   e56d2e4c87eea       coredns-668d6bf9bc-qnl6q
	0ec0a3a234c7c       c69fa2e9cbf5f       26 minutes ago      Exited              coredns                   0                   2818c413e6e32       coredns-668d6bf9bc-wbn4p
	2df8ccb8d6ed9       df3849d954c98       26 minutes ago      Exited              kindnet-cni               0                   08244cfc780bd       kindnet-hm99t
	e22a81661302f       f1332858868e1       26 minutes ago      Exited              kube-proxy                0                   f20a0bcfbd507       kube-proxy-cg945
	8263b35014337       b6a454c5a800d       27 minutes ago      Exited              kube-controller-manager   0                   96ffccfabb2f0       kube-controller-manager-ha-290859
	3607093f95b04       85b7a174738ba       27 minutes ago      Exited              kube-apiserver            0                   7d06c53c8318a       kube-apiserver-ha-290859
	b9d0c94204534       a9e7e6b294baf       27 minutes ago      Exited              etcd                      0                   07c98c2ded11c       etcd-ha-290859
	341626ffff967       d8e673e7c9983       27 minutes ago      Exited              kube-scheduler            0                   d86edf81d4f34       kube-scheduler-ha-290859
	
	
	==> containerd <==
	Apr 14 14:52:05 ha-290859 containerd[832]: time="2025-04-14T14:52:05.640171349Z" level=info msg="StartContainer for \"6def8b5e81c3c293839e823e7db25b60e0f88e530e87f93ad6439e1ef8967337\" returns successfully"
	Apr 14 14:52:06 ha-290859 containerd[832]: time="2025-04-14T14:52:06.457242635Z" level=info msg="RemoveContainer for \"922f97d06563e10c12ce83edd45e4f1aa0b78449dcdb50b413a7f4fc80cc346b\""
	Apr 14 14:52:06 ha-290859 containerd[832]: time="2025-04-14T14:52:06.469888693Z" level=info msg="RemoveContainer for \"922f97d06563e10c12ce83edd45e4f1aa0b78449dcdb50b413a7f4fc80cc346b\" returns successfully"
	Apr 14 14:52:17 ha-290859 containerd[832]: time="2025-04-14T14:52:17.268681775Z" level=info msg="CreateContainer within sandbox \"22012253a39e523fbee6ecb847d27dbb8e09ad98b80aa344f91a171c063bedc5\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:2,}"
	Apr 14 14:52:17 ha-290859 containerd[832]: time="2025-04-14T14:52:17.288966764Z" level=info msg="CreateContainer within sandbox \"22012253a39e523fbee6ecb847d27dbb8e09ad98b80aa344f91a171c063bedc5\" for &ContainerMetadata{Name:storage-provisioner,Attempt:2,} returns container id \"ea9e85492cab11d04c4610b349d14e65f48b4f7ef9b1bf510cce3f98d9f23a26\""
	Apr 14 14:52:17 ha-290859 containerd[832]: time="2025-04-14T14:52:17.289554135Z" level=info msg="StartContainer for \"ea9e85492cab11d04c4610b349d14e65f48b4f7ef9b1bf510cce3f98d9f23a26\""
	Apr 14 14:52:17 ha-290859 containerd[832]: time="2025-04-14T14:52:17.339537509Z" level=info msg="StartContainer for \"ea9e85492cab11d04c4610b349d14e65f48b4f7ef9b1bf510cce3f98d9f23a26\" returns successfully"
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.225918045Z" level=info msg="RemoveContainer for \"9914f8879fc4321c682c89c4d9b8a4cf65aa1773b5281eca94e0f93095a24f4d\""
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.231418188Z" level=info msg="RemoveContainer for \"9914f8879fc4321c682c89c4d9b8a4cf65aa1773b5281eca94e0f93095a24f4d\" returns successfully"
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.233079029Z" level=info msg="StopPodSandbox for \"7b4e857fc4a7278a2912c7bad6709c158c79bd073828baa274e7c8874610feb5\""
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.233179127Z" level=info msg="TearDown network for sandbox \"7b4e857fc4a7278a2912c7bad6709c158c79bd073828baa274e7c8874610feb5\" successfully"
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.233192370Z" level=info msg="StopPodSandbox for \"7b4e857fc4a7278a2912c7bad6709c158c79bd073828baa274e7c8874610feb5\" returns successfully"
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.233840780Z" level=info msg="RemovePodSandbox for \"7b4e857fc4a7278a2912c7bad6709c158c79bd073828baa274e7c8874610feb5\""
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.233892071Z" level=info msg="Forcibly stopping sandbox \"7b4e857fc4a7278a2912c7bad6709c158c79bd073828baa274e7c8874610feb5\""
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.233958310Z" level=info msg="TearDown network for sandbox \"7b4e857fc4a7278a2912c7bad6709c158c79bd073828baa274e7c8874610feb5\" successfully"
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.239481391Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7b4e857fc4a7278a2912c7bad6709c158c79bd073828baa274e7c8874610feb5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.239617741Z" level=info msg="RemovePodSandbox \"7b4e857fc4a7278a2912c7bad6709c158c79bd073828baa274e7c8874610feb5\" returns successfully"
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.240179712Z" level=info msg="StopPodSandbox for \"4de376d34ee7f88a6fa395d518e7950ac2b1691d3e1668d0d79130d65133045f\""
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.240271309Z" level=info msg="TearDown network for sandbox \"4de376d34ee7f88a6fa395d518e7950ac2b1691d3e1668d0d79130d65133045f\" successfully"
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.240298864Z" level=info msg="StopPodSandbox for \"4de376d34ee7f88a6fa395d518e7950ac2b1691d3e1668d0d79130d65133045f\" returns successfully"
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.240783074Z" level=info msg="RemovePodSandbox for \"4de376d34ee7f88a6fa395d518e7950ac2b1691d3e1668d0d79130d65133045f\""
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.240816354Z" level=info msg="Forcibly stopping sandbox \"4de376d34ee7f88a6fa395d518e7950ac2b1691d3e1668d0d79130d65133045f\""
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.240870755Z" level=info msg="TearDown network for sandbox \"4de376d34ee7f88a6fa395d518e7950ac2b1691d3e1668d0d79130d65133045f\" successfully"
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.245855866Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4de376d34ee7f88a6fa395d518e7950ac2b1691d3e1668d0d79130d65133045f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.245939634Z" level=info msg="RemovePodSandbox \"4de376d34ee7f88a6fa395d518e7950ac2b1691d3e1668d0d79130d65133045f\" returns successfully"
	
	
	==> coredns [0ec0a3a234c7c9ab89ca83a237362a229e9c5f0e94fdbf641b886cf994e1cd2f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:46089 - 56153 "HINFO IN 6072608555509463616.6529762715821029691. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009374887s
	[INFO] 10.244.0.4:35907 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000221161s
	[INFO] 10.244.0.4:36782 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005796917s
	[INFO] 10.244.0.4:41522 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000189547s
	[INFO] 10.244.0.4:42146 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118814s
	[INFO] 10.244.0.4:60607 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123758s
	[INFO] 10.244.0.4:43711 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000363945s
	[INFO] 10.244.0.4:55165 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147511s
	[INFO] 10.244.0.4:37988 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063814s
	[INFO] 10.244.0.4:34715 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110518s
	
	
	==> coredns [731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:50026 - 40228 "HINFO IN 6089878548460793106.7503956428927620962. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010088983s
	[INFO] 10.244.0.4:56129 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00054069s
	[INFO] 10.244.0.4:53926 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.015577927s
	[INFO] 10.244.0.4:39454 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 1.017801671s
	[INFO] 10.244.0.4:52928 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.006480432s
	[INFO] 10.244.0.4:37155 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000144828s
	[INFO] 10.244.0.4:60063 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003567762s
	[INFO] 10.244.0.4:60207 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153406s
	[INFO] 10.244.0.4:60174 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117303s
	[INFO] 10.244.0.4:60031 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124845s
	[INFO] 10.244.0.4:43114 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000177401s
	[INFO] 10.244.0.4:59108 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000291115s
	
	
	==> coredns [c3c2f4d5fe419392ff3850394da92847c7bcfe369f4d0eddffd38c2a59b41025] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:48956 - 43158 "HINFO IN 5542730592661564248.5649616312753148618. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009354162s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1967277509]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Apr-2025 14:52:05.690) (total time: 30002ms):
	Trace[1967277509]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (14:52:35.692)
	Trace[1967277509]: [30.002592464s] [30.002592464s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1343823812]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Apr-2025 14:52:05.691) (total time: 30002ms):
	Trace[1343823812]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (14:52:35.693)
	Trace[1343823812]: [30.00250289s] [30.00250289s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2019019817]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Apr-2025 14:52:05.690) (total time: 30004ms):
	Trace[2019019817]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (14:52:35.694)
	Trace[2019019817]: [30.004408468s] [30.004408468s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [d9bf8cef6e9551ba044bfa75d53bebdabf94a544fb35bcba8ae9dda955c97297] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:52958 - 12430 "HINFO IN 2501253073000439982.8063739159986489070. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.007070061s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1427080852]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Apr-2025 14:52:05.691) (total time: 30002ms):
	Trace[1427080852]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (14:52:35.691)
	Trace[1427080852]: [30.002092041s] [30.002092041s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1959333545]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Apr-2025 14:52:05.691) (total time: 30002ms):
	Trace[1959333545]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (14:52:35.692)
	Trace[1959333545]: [30.002031471s] [30.002031471s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[910229496]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Apr-2025 14:52:05.690) (total time: 30001ms):
	Trace[910229496]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (14:52:35.691)
	Trace[910229496]: [30.001488485s] [30.001488485s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               ha-290859
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-290859
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ed8f1f01b35eff2786f40199152a1775806f2de2
	                    minikube.k8s.io/name=ha-290859
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_14T14_29_26_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Apr 2025 14:29:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-290859
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Apr 2025 14:56:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Apr 2025 14:52:02 +0000   Mon, 14 Apr 2025 14:29:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Apr 2025 14:52:02 +0000   Mon, 14 Apr 2025 14:29:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Apr 2025 14:52:02 +0000   Mon, 14 Apr 2025 14:29:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Apr 2025 14:52:02 +0000   Mon, 14 Apr 2025 14:29:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.110
	  Hostname:    ha-290859
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0538f5775f954b3bbf6bc94e8eb6c49a
	  System UUID:                0538f577-5f95-4b3b-bf6b-c94e8eb6c49a
	  Boot ID:                    506c18f2-7f12-4001-8285-917ecaddf63d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-t6bgg             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 coredns-668d6bf9bc-qnl6q             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 coredns-668d6bf9bc-wbn4p             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 etcd-ha-290859                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         26m
	  kube-system                 kindnet-hm99t                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26m
	  kube-system                 kube-apiserver-ha-290859             250m (12%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-controller-manager-ha-290859    200m (10%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-proxy-cg945                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-ha-290859             100m (5%)     0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-vip-ha-290859                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m17s                  kube-proxy       
	  Normal   Starting                 26m                    kube-proxy       
	  Normal   Starting                 26m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  26m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  26m                    kubelet          Node ha-290859 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    26m                    kubelet          Node ha-290859 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     26m                    kubelet          Node ha-290859 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           26m                    node-controller  Node ha-290859 event: Registered Node ha-290859 in Controller
	  Normal   NodeReady                26m                    kubelet          Node ha-290859 status is now: NodeReady
	  Normal   Starting                 4m35s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  4m35s (x8 over 4m35s)  kubelet          Node ha-290859 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m35s (x8 over 4m35s)  kubelet          Node ha-290859 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m35s (x7 over 4m35s)  kubelet          Node ha-290859 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  4m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           4m22s                  node-controller  Node ha-290859 event: Registered Node ha-290859 in Controller
	  Warning  Rebooted                 4m21s                  kubelet          Node ha-290859 has been rebooted, boot id: 506c18f2-7f12-4001-8285-917ecaddf63d
	
	
	==> dmesg <==
	[Apr14 14:51] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051074] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036733] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.829588] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.946390] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.551280] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.183144] systemd-fstab-generator[755]: Ignoring "noauto" option for root device
	[  +0.054346] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061747] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +0.177698] systemd-fstab-generator[782]: Ignoring "noauto" option for root device
	[  +0.145567] systemd-fstab-generator[794]: Ignoring "noauto" option for root device
	[  +0.269397] systemd-fstab-generator[824]: Ignoring "noauto" option for root device
	[  +1.160092] systemd-fstab-generator[899]: Ignoring "noauto" option for root device
	[  +6.952352] kauditd_printk_skb: 197 callbacks suppressed
	[Apr14 14:52] kauditd_printk_skb: 40 callbacks suppressed
	[ +12.604617] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [6bb8bbfa1b317897b9bcc96ba49e7c68f83cc4409dd69a72b86f0448aa2519ea] <==
	{"level":"info","ts":"2025-04-14T14:51:55.652582Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a3dbfa6decfc8853","local-member-id":"fbb007bab925a598","added-peer-id":"fbb007bab925a598","added-peer-peer-urls":["https://192.168.39.110:2380"]}
	{"level":"info","ts":"2025-04-14T14:51:55.652820Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a3dbfa6decfc8853","local-member-id":"fbb007bab925a598","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:51:55.652875Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:51:55.657644Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T14:51:55.677815Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-04-14T14:51:55.678882Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"fbb007bab925a598","initial-advertise-peer-urls":["https://192.168.39.110:2380"],"listen-peer-urls":["https://192.168.39.110:2380"],"advertise-client-urls":["https://192.168.39.110:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.110:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-14T14:51:55.678927Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-14T14:51:55.679144Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.110:2380"}
	{"level":"info","ts":"2025-04-14T14:51:55.679165Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.110:2380"}
	{"level":"info","ts":"2025-04-14T14:51:56.795570Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 is starting a new election at term 2"}
	{"level":"info","ts":"2025-04-14T14:51:56.795637Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-04-14T14:51:56.795654Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 received MsgPreVoteResp from fbb007bab925a598 at term 2"}
	{"level":"info","ts":"2025-04-14T14:51:56.795666Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 became candidate at term 3"}
	{"level":"info","ts":"2025-04-14T14:51:56.795959Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 received MsgVoteResp from fbb007bab925a598 at term 3"}
	{"level":"info","ts":"2025-04-14T14:51:56.796217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 became leader at term 3"}
	{"level":"info","ts":"2025-04-14T14:51:56.796240Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fbb007bab925a598 elected leader fbb007bab925a598 at term 3"}
	{"level":"info","ts":"2025-04-14T14:51:56.797919Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"fbb007bab925a598","local-member-attributes":"{Name:ha-290859 ClientURLs:[https://192.168.39.110:2379]}","request-path":"/0/members/fbb007bab925a598/attributes","cluster-id":"a3dbfa6decfc8853","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-14T14:51:56.798371Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T14:51:56.798558Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T14:51:56.799556Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-14T14:51:56.799592Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-14T14:51:56.800393Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T14:51:56.801226Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.110:2379"}
	{"level":"info","ts":"2025-04-14T14:51:56.800393Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T14:51:56.802399Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [b9d0c942045346e617420beacf1ee53ebaa73b72295bfad233845fe524f8b15c] <==
	{"level":"info","ts":"2025-04-14T14:29:20.942134Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-14T14:29:20.942264Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.110:2379"}
	{"level":"info","ts":"2025-04-14T14:29:20.943625Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-14T14:29:20.943655Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2025-04-14T14:29:27.104552Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.197172ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/node-controller\" limit:1 ","response":"range_response_count:1 size:195"}
	{"level":"info","ts":"2025-04-14T14:29:27.104712Z","caller":"traceutil/trace.go:171","msg":"trace[2014118741] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/node-controller; range_end:; response_count:1; response_revision:283; }","duration":"161.489617ms","start":"2025-04-14T14:29:26.943197Z","end":"2025-04-14T14:29:27.104687Z","steps":["trace[2014118741] 'range keys from in-memory index tree'  (duration: 161.141805ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:29:27.105569Z","caller":"traceutil/trace.go:171","msg":"trace[1003808847] transaction","detail":"{read_only:false; response_revision:284; number_of_response:1; }","duration":"157.128151ms","start":"2025-04-14T14:29:26.948431Z","end":"2025-04-14T14:29:27.105559Z","steps":["trace[1003808847] 'process raft request'  (duration: 84.378612ms)","trace[1003808847] 'compare'  (duration: 71.52798ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-14T14:29:27.104865Z","caller":"traceutil/trace.go:171","msg":"trace[43329066] linearizableReadLoop","detail":"{readStateIndex:297; appliedIndex:296; }","duration":"119.436827ms","start":"2025-04-14T14:29:26.985404Z","end":"2025-04-14T14:29:27.104841Z","steps":["trace[43329066] 'read index received'  (duration: 47.335931ms)","trace[43329066] 'applied index is now lower than readState.Index'  (duration: 72.100547ms)"],"step_count":2}
	{"level":"warn","ts":"2025-04-14T14:29:27.105882Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.482108ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-290859\" limit:1 ","response":"range_response_count:1 size:4024"}
	{"level":"info","ts":"2025-04-14T14:29:27.105907Z","caller":"traceutil/trace.go:171","msg":"trace[1848025885] range","detail":"{range_begin:/registry/minions/ha-290859; range_end:; response_count:1; response_revision:284; }","duration":"120.538719ms","start":"2025-04-14T14:29:26.985360Z","end":"2025-04-14T14:29:27.105899Z","steps":["trace[1848025885] 'agreement among raft nodes before linearized reading'  (duration: 120.384333ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:30:04.979205Z","caller":"traceutil/trace.go:171","msg":"trace[85484590] transaction","detail":"{read_only:false; response_revision:496; number_of_response:1; }","duration":"156.247744ms","start":"2025-04-14T14:30:04.822935Z","end":"2025-04-14T14:30:04.979183Z","steps":["trace[85484590] 'process raft request'  (duration: 156.102613ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:39:20.967676Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":955}
	{"level":"info","ts":"2025-04-14T14:39:20.980951Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":955,"took":"12.971168ms","hash":3281203929,"current-db-size-bytes":2400256,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2400256,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2025-04-14T14:39:20.980998Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3281203929,"revision":955,"compact-revision":-1}
	{"level":"info","ts":"2025-04-14T14:42:12.425594Z","caller":"traceutil/trace.go:171","msg":"trace[593749251] linearizableReadLoop","detail":"{readStateIndex:1974; appliedIndex:1973; }","duration":"103.549581ms","start":"2025-04-14T14:42:12.322004Z","end":"2025-04-14T14:42:12.425554Z","steps":["trace[593749251] 'read index received'  (duration: 102.720139ms)","trace[593749251] 'applied index is now lower than readState.Index'  (duration: 828.805µs)"],"step_count":2}
	{"level":"warn","ts":"2025-04-14T14:42:12.426144Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.759593ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2025-04-14T14:42:12.426196Z","caller":"traceutil/trace.go:171","msg":"trace[257637869] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; response_count:0; response_revision:1805; }","duration":"104.23976ms","start":"2025-04-14T14:42:12.321948Z","end":"2025-04-14T14:42:12.426188Z","steps":["trace[257637869] 'agreement among raft nodes before linearized reading'  (duration: 103.769974ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:42:12.425685Z","caller":"traceutil/trace.go:171","msg":"trace[874985590] transaction","detail":"{read_only:false; response_revision:1805; number_of_response:1; }","duration":"128.996586ms","start":"2025-04-14T14:42:12.296675Z","end":"2025-04-14T14:42:12.425672Z","steps":["trace[874985590] 'process raft request'  (duration: 128.079961ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T14:42:29.811595Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.362023ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11932452365827166964 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:25989634b465d2f3>","response":"size:42"}
	{"level":"info","ts":"2025-04-14T14:44:20.976766Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1495}
	{"level":"info","ts":"2025-04-14T14:44:20.980966Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1495,"took":"3.550898ms","hash":2769383186,"current-db-size-bytes":2400256,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2031616,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2025-04-14T14:44:20.981013Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2769383186,"revision":1495,"compact-revision":955}
	{"level":"info","ts":"2025-04-14T14:49:20.985771Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2116}
	{"level":"info","ts":"2025-04-14T14:49:20.990796Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":2116,"took":"4.442405ms","hash":2965091083,"current-db-size-bytes":2400256,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2244608,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2025-04-14T14:49:20.990930Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2965091083,"revision":2116,"compact-revision":1495}
	
	
	==> kernel <==
	 14:56:23 up 4 min,  0 users,  load average: 0.83, 0.38, 0.15
	Linux ha-290859 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2df8ccb8d6ed928a95e69ecd1be2105fc737c699aa26805820a0af0eca5bb50d] <==
	I0414 14:48:44.500441       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:48:54.500620       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:48:54.501802       1 main.go:301] handling current node
	I0414 14:48:54.501933       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:48:54.501959       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:49:04.501654       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:49:04.501878       1 main.go:301] handling current node
	I0414 14:49:04.502475       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:49:04.502663       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:49:14.500855       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:49:14.500928       1 main.go:301] handling current node
	I0414 14:49:14.500947       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:49:14.500953       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:49:24.509280       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:49:24.509428       1 main.go:301] handling current node
	I0414 14:49:24.509592       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:49:24.509696       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:49:34.500704       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:49:34.500778       1 main.go:301] handling current node
	I0414 14:49:34.500819       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:49:34.500825       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:49:44.504658       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:49:44.504751       1 main.go:301] handling current node
	I0414 14:49:44.504856       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:49:44.504972       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [607041fc2f4edc17de3caec2d00a9f9b49a94ed154254da72ec094a0f148db36] <==
	I0414 14:55:16.456277       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:55:26.465697       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:55:26.465845       1 main.go:301] handling current node
	I0414 14:55:26.465927       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:55:26.465968       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:55:36.463752       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:55:36.463830       1 main.go:301] handling current node
	I0414 14:55:36.463853       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:55:36.463859       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:55:46.456585       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:55:46.457113       1 main.go:301] handling current node
	I0414 14:55:46.457561       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:55:46.459726       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:55:56.464186       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:55:56.464300       1 main.go:301] handling current node
	I0414 14:55:56.464332       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:55:56.464345       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:56:06.455081       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:56:06.455167       1 main.go:301] handling current node
	I0414 14:56:06.455204       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:56:06.455229       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:56:16.454747       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:56:16.454884       1 main.go:301] handling current node
	I0414 14:56:16.454938       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:56:16.455070       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [00b109770be1cb3d772b7d440ccc36c098a8627e8280f195c263a0a87a6e0c07] <==
	I0414 14:51:57.932933       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0414 14:51:58.014528       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0414 14:51:58.014629       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0414 14:51:58.014535       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0414 14:51:58.023891       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0414 14:51:58.024459       1 shared_informer.go:320] Caches are synced for configmaps
	I0414 14:51:58.024473       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0414 14:51:58.024547       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0414 14:51:58.025376       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0414 14:51:58.035556       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0414 14:51:58.035771       1 aggregator.go:171] initial CRD sync complete...
	I0414 14:51:58.035828       1 autoregister_controller.go:144] Starting autoregister controller
	I0414 14:51:58.035845       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0414 14:51:58.035857       1 cache.go:39] Caches are synced for autoregister controller
	I0414 14:51:58.036008       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0414 14:51:58.036120       1 policy_source.go:240] refreshing policies
	I0414 14:51:58.097914       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0414 14:51:58.101123       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0414 14:51:58.918987       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0414 14:51:59.963976       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0414 14:52:04.263824       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0414 14:52:04.306348       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0414 14:52:04.363470       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0414 14:52:04.453440       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0414 14:52:04.454453       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [3607093f95b0430c4841d7be9ed19d0163ff2e9ee2889a44f89bd1ca07bf42d3] <==
	I0414 14:29:22.362271       1 autoregister_controller.go:144] Starting autoregister controller
	I0414 14:29:22.362276       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0414 14:29:22.362280       1 cache.go:39] Caches are synced for autoregister controller
	I0414 14:29:22.378719       1 controller.go:615] quota admission added evaluator for: namespaces
	I0414 14:29:22.457815       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0414 14:29:23.164003       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0414 14:29:23.168635       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0414 14:29:23.168816       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0414 14:29:23.763560       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0414 14:29:23.812117       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0414 14:29:23.884276       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0414 14:29:23.896601       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.110]
	I0414 14:29:23.897534       1 controller.go:615] quota admission added evaluator for: endpoints
	I0414 14:29:23.902387       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0414 14:29:24.193931       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0414 14:29:25.780107       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0414 14:29:25.808820       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0414 14:29:25.816856       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0414 14:29:29.653221       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0414 14:29:29.756960       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0414 14:41:55.019097       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52466: use of closed network connection
	E0414 14:41:55.440782       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52532: use of closed network connection
	E0414 14:41:55.859929       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52600: use of closed network connection
	E0414 14:41:58.277207       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52686: use of closed network connection
	E0414 14:41:58.438151       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52698: use of closed network connection
	
	
	==> kube-controller-manager [8263b35014337f6119ba3a0d6487090fd5b1b3b8a002a99623620e847d186847] <==
	I0414 14:42:29.963750       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:29.969981       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="39.002µs"
	I0414 14:42:30.275380       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:30.614411       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:33.964410       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-290859-m03"
	I0414 14:42:34.046665       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:39.961881       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:49.191468       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-290859-m03"
	I0414 14:42:49.192361       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:49.201252       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:49.216690       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="71.679µs"
	I0414 14:42:49.217122       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="45.948µs"
	I0414 14:42:49.230018       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="69.053µs"
	I0414 14:42:52.664944       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="13.387962ms"
	I0414 14:42:52.665652       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="82.546µs"
	I0414 14:42:53.979890       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:43:00.010906       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:46:33.503243       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:47:25.635375       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
	I0414 14:49:09.052122       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:49:09.070345       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:49:09.083390       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="59.905µs"
	I0414 14:49:09.105070       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="10.887319ms"
	I0414 14:49:09.105381       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="40.135µs"
	I0414 14:49:14.179848       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	
	
	==> kube-controller-manager [e8658abcccb8b10d531ad775050d96f3375e484efcbaba4d5509a7a22f3608a9] <==
	I0414 14:52:01.154050       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:52:01.197460       1 shared_informer.go:320] Caches are synced for garbage collector
	I0414 14:52:01.197682       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0414 14:52:01.197815       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0414 14:52:01.207566       1 shared_informer.go:320] Caches are synced for garbage collector
	I0414 14:52:02.153254       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
	I0414 14:52:04.272410       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="26.559874ms"
	I0414 14:52:04.273686       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="51.226µs"
	I0414 14:52:04.439056       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="13.737014ms"
	I0414 14:52:04.439344       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="242.032µs"
	I0414 14:52:04.459376       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="12.444236ms"
	I0414 14:52:04.460062       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="174.256µs"
	I0414 14:52:06.474796       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="54.379µs"
	I0414 14:52:06.508895       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="52.708µs"
	I0414 14:52:06.532239       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="7.280916ms"
	I0414 14:52:06.532571       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="115.282µs"
	I0414 14:52:38.517073       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="20.719998ms"
	I0414 14:52:38.517449       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="101.016µs"
	I0414 14:52:38.546449       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="13.225146ms"
	I0414 14:52:38.546575       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="46.763µs"
	I0414 14:56:15.487465       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:56:15.503080       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:56:15.536625       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="25.061691ms"
	I0414 14:56:15.546233       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="9.560251ms"
	I0414 14:56:15.546295       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="27.858µs"
	
	
	==> kube-proxy [1c01d86a74294bbfd5f487ec85ffc0f35cc4b979ad90c940eea5a17a8e5f46fb] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0414 14:52:05.724966       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0414 14:52:05.743076       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.110"]
	E0414 14:52:05.743397       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0414 14:52:05.784686       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0414 14:52:05.784731       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0414 14:52:05.784755       1 server_linux.go:170] "Using iptables Proxier"
	I0414 14:52:05.786929       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0414 14:52:05.787617       1 server.go:497] "Version info" version="v1.32.2"
	I0414 14:52:05.787645       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 14:52:05.789983       1 config.go:199] "Starting service config controller"
	I0414 14:52:05.790536       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0414 14:52:05.791108       1 config.go:329] "Starting node config controller"
	I0414 14:52:05.791131       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0414 14:52:05.794555       1 config.go:105] "Starting endpoint slice config controller"
	I0414 14:52:05.796335       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0414 14:52:05.891275       1 shared_informer.go:320] Caches are synced for service config
	I0414 14:52:05.891550       1 shared_informer.go:320] Caches are synced for node config
	I0414 14:52:05.901825       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [e22a81661302ff340c9846a7a06a13d955ab98cfe8e7088e0c805fb4f3eee8a2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0414 14:29:30.555771       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0414 14:29:30.580550       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.110"]
	E0414 14:29:30.580640       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0414 14:29:30.617235       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0414 14:29:30.617293       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0414 14:29:30.617328       1 server_linux.go:170] "Using iptables Proxier"
	I0414 14:29:30.620046       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0414 14:29:30.620989       1 server.go:497] "Version info" version="v1.32.2"
	I0414 14:29:30.621018       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 14:29:30.625365       1 config.go:329] "Starting node config controller"
	I0414 14:29:30.625863       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0414 14:29:30.628597       1 config.go:199] "Starting service config controller"
	I0414 14:29:30.628644       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0414 14:29:30.628665       1 config.go:105] "Starting endpoint slice config controller"
	I0414 14:29:30.628683       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0414 14:29:30.726314       1 shared_informer.go:320] Caches are synced for node config
	I0414 14:29:30.729639       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0414 14:29:30.729680       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [29445064369e58250458efcfeed9a28e6da75ce4bcb6f15c9e58844eb1ba811e] <==
	I0414 14:51:55.842470       1 serving.go:386] Generated self-signed cert in-memory
	W0414 14:51:57.981716       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0414 14:51:57.981805       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0414 14:51:57.981829       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0414 14:51:57.981840       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0414 14:51:58.035351       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0414 14:51:58.035404       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 14:51:58.038565       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0414 14:51:58.038986       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0414 14:51:58.039147       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0414 14:51:58.039434       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0414 14:51:58.140699       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [341626ffff967b14e3bfaa050905eba2b82a07223c0356ee50b5deeef6d9898b] <==
	E0414 14:29:22.288686       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:22.287191       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:22.288704       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:22.286394       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0414 14:29:22.288719       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0414 14:29:22.285771       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.108289       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0414 14:29:23.108351       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.153824       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:23.153954       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.203744       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0414 14:29:23.203977       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0414 14:29:23.367236       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0414 14:29:23.367550       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.396026       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0414 14:29:23.396243       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.401643       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:23.401820       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.425454       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0414 14:29:23.425684       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.433181       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:23.433222       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.457688       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0414 14:29:23.457949       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0414 14:29:25.662221       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 14 14:52:06 ha-290859 kubelet[906]: I0414 14:52:06.454237     906 scope.go:117] "RemoveContainer" containerID="922f97d06563e10c12ce83edd45e4f1aa0b78449dcdb50b413a7f4fc80cc346b"
	Apr 14 14:52:06 ha-290859 kubelet[906]: I0414 14:52:06.455356     906 scope.go:117] "RemoveContainer" containerID="acc7b3f819a6b9fa74f5e5423aac252faa39c9dec24306ff130436d9a722188a"
	Apr 14 14:52:06 ha-290859 kubelet[906]: E0414 14:52:06.455566     906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a98bb55f-5a73-4436-82eb-ae7534928039)\"" pod="kube-system/storage-provisioner" podUID="a98bb55f-5a73-4436-82eb-ae7534928039"
	Apr 14 14:52:17 ha-290859 kubelet[906]: I0414 14:52:17.265870     906 scope.go:117] "RemoveContainer" containerID="acc7b3f819a6b9fa74f5e5423aac252faa39c9dec24306ff130436d9a722188a"
	Apr 14 14:52:48 ha-290859 kubelet[906]: I0414 14:52:48.224225     906 scope.go:117] "RemoveContainer" containerID="9914f8879fc4321c682c89c4d9b8a4cf65aa1773b5281eca94e0f93095a24f4d"
	Apr 14 14:52:48 ha-290859 kubelet[906]: E0414 14:52:48.281657     906 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:52:48 ha-290859 kubelet[906]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:52:48 ha-290859 kubelet[906]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:52:48 ha-290859 kubelet[906]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:52:48 ha-290859 kubelet[906]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 14:53:48 ha-290859 kubelet[906]: E0414 14:53:48.279850     906 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:53:48 ha-290859 kubelet[906]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:53:48 ha-290859 kubelet[906]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:53:48 ha-290859 kubelet[906]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:53:48 ha-290859 kubelet[906]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 14:54:48 ha-290859 kubelet[906]: E0414 14:54:48.287249     906 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:54:48 ha-290859 kubelet[906]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:54:48 ha-290859 kubelet[906]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:54:48 ha-290859 kubelet[906]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:54:48 ha-290859 kubelet[906]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 14:55:48 ha-290859 kubelet[906]: E0414 14:55:48.279366     906 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:55:48 ha-290859 kubelet[906]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:55:48 ha-290859 kubelet[906]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:55:48 ha-290859 kubelet[906]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:55:48 ha-290859 kubelet[906]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
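(Note: the repeated kube-proxy errors above, "could not run nftables command: ... Operation not supported", appear to be cleanup noise rather than the failure cause: the minikube guest kernel lacks nftables table support, so kube-proxy logs the failed cleanup and then, as the "Using iptables Proxier" lines show, falls back to the iptables backend. The recurring kubelet "Could not set up iptables canary" messages for the ip6 `nat' table look like the same family of missing-kernel-support noise. Below is a minimal sketch, not kube-proxy's actual code, of the probe the `/dev/stdin:1:1-25` error locations imply; it assumes an `nft` binary on PATH and feeds a ruleset on stdin.)

// Minimal sketch (assumption: `nft` on PATH) of probing kernel nftables
// support the way the errors above suggest kube-proxy does: run nft with
// a ruleset on stdin and classify "Operation not supported" as a missing
// kernel feature rather than a fatal error.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("nft", "-f", "-") // "-" reads the ruleset from stdin
	cmd.Stdin = strings.NewReader("add table ip probe\ndelete table ip probe\n")
	out, err := cmd.CombinedOutput()
	if err != nil {
		if strings.Contains(string(out), "Operation not supported") {
			fmt.Println("kernel lacks nftables support; falling back to iptables")
			return
		}
		fmt.Printf("nft failed: %v: %s\n", err, out)
		return
	}
	fmt.Println("nftables available")
}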
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-290859 -n ha-290859
helpers_test.go:261: (dbg) Run:  kubectl --context ha-290859 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-58667487b6-bfghg busybox-58667487b6-q9jvx
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-290859 describe pod busybox-58667487b6-bfghg busybox-58667487b6-q9jvx
helpers_test.go:282: (dbg) kubectl --context ha-290859 describe pod busybox-58667487b6-bfghg busybox-58667487b6-q9jvx:

-- stdout --
	Name:             busybox-58667487b6-bfghg
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=58667487b6
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-58667487b6
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6l76h (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-6l76h:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  9s    default-scheduler  0/2 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.
	
	
	Name:             busybox-58667487b6-q9jvx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=58667487b6
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-58667487b6
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fklg7 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-fklg7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                    From               Message
	  ----     ------            ----                   ----               -------
	  Warning  FailedScheduling  4m23s (x2 over 4m26s)  default-scheduler  0/2 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  15m (x3 over 26m)      default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  13m (x2 over 13m)      default-scheduler  0/2 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  7m58s (x3 over 13m)    default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  7m4s (x2 over 7m15s)   default-scheduler  0/2 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.

-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (9.64s)
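The FailedScheduling events in the describe output above explain the two Pending busybox pods: every schedulable node already runs a busybox replica, and a required pod anti-affinity rule forbids placing two replicas on the same node, so once m03 is deleted there is nowhere left to schedule. The deployment manifest is not part of these logs, so the exact rule is an assumption, but a required anti-affinity of roughly the following shape (sketched with client-go API types, keyed on the node hostname) produces exactly the "didn't match pod anti-affinity rules" message:

// Minimal sketch of a required pod anti-affinity rule of the kind implied
// by the scheduler events above. The label selector and topology key are
// assumptions; the test's actual busybox manifest is not shown in this log.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	affinity := &corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			// A *required* rule hard-blocks scheduling (the pod stays Pending);
			// a preferred rule would merely score conflicting nodes lower.
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"app": "busybox"},
				},
				// At most one matching replica per distinct value of this
				// node label, i.e. one per host.
				TopologyKey: "kubernetes.io/hostname",
			}},
		},
	}
	fmt.Printf("%+v\n", affinity)
}

Under such a rule the "0/2 nodes are available" breakdown reads naturally: one node is unschedulable (or tainted), the other already hosts a busybox replica, and preemption cannot help because evicting a victim would still leave the anti-affinity violated.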

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.86s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:415: expected profile "ha-290859" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-290859\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-290859\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.32.2\",\"ClusterName\":\"ha-290859\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"containerd\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.110\",\"Port\":8443,\"KubernetesVersion\":\"v1.32.2\",\"ContainerRuntime\":\"containerd\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.111\",\"Port\":8443,\"KubernetesVersion\":\"v1.32.2\",\"ContainerRuntime\":\"containerd\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-290859 -n ha-290859
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-290859 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-290859 logs -n 25: (1.500432195s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x -- nslookup |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx -- nslookup |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg -- nslookup |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x             |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx             |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg             |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg -- sh       |           |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1            |           |         |         |                     |                     |
	| node    | add -p ha-290859 -v=7                | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:42 UTC | 14 Apr 25 14:42 UTC |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-290859 node stop m02 -v=7         | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:42 UTC | 14 Apr 25 14:42 UTC |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-290859 node start m02 -v=7        | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:43 UTC |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-290859 -v=7               | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:48 UTC |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| stop    | -p ha-290859 -v=7                    | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:48 UTC | 14 Apr 25 14:51 UTC |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| start   | -p ha-290859 --wait=true -v=7        | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:51 UTC |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-290859                    | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:56 UTC |                     |
	| node    | ha-290859 node delete m03 -v=7       | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:56 UTC | 14 Apr 25 14:56 UTC |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 14:51:24
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 14:51:24.924385 1221070 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:51:24.924621 1221070 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:51:24.924629 1221070 out.go:358] Setting ErrFile to fd 2...
	I0414 14:51:24.924633 1221070 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:51:24.924808 1221070 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
	I0414 14:51:24.925345 1221070 out.go:352] Setting JSON to false
	I0414 14:51:24.926340 1221070 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":23628,"bootTime":1744618657,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 14:51:24.926457 1221070 start.go:139] virtualization: kvm guest
	I0414 14:51:24.928287 1221070 out.go:177] * [ha-290859] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 14:51:24.929459 1221070 out.go:177]   - MINIKUBE_LOCATION=20512
	I0414 14:51:24.929469 1221070 notify.go:220] Checking for updates...
	I0414 14:51:24.931737 1221070 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 14:51:24.933068 1221070 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:51:24.934102 1221070 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:51:24.935103 1221070 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 14:51:24.936089 1221070 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 14:51:24.937496 1221070 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:51:24.937602 1221070 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 14:51:24.938128 1221070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:51:24.938198 1221070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:51:24.954244 1221070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45077
	I0414 14:51:24.954880 1221070 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:51:24.955464 1221070 main.go:141] libmachine: Using API Version  1
	I0414 14:51:24.955489 1221070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:51:24.955900 1221070 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:51:24.956117 1221070 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:51:24.990242 1221070 out.go:177] * Using the kvm2 driver based on existing profile
	I0414 14:51:24.991319 1221070 start.go:297] selected driver: kvm2
	I0414 14:51:24.991332 1221070 start.go:901] validating driver "kvm2" against &{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.112 Port:0 KubernetesVersion:v1.32.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:51:24.991491 1221070 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 14:51:24.991827 1221070 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:51:24.991902 1221070 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20512-1196368/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 14:51:25.007424 1221070 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 14:51:25.008082 1221070 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 14:51:25.008124 1221070 cni.go:84] Creating CNI manager for ""
	I0414 14:51:25.008189 1221070 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0414 14:51:25.008244 1221070 start.go:340] cluster config:
	{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.112 Port:0 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:51:25.008400 1221070 iso.go:125] acquiring lock: {Name:mkbf783c803effe6c4b8297ac6b84dcca9e29413 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:51:25.010019 1221070 out.go:177] * Starting "ha-290859" primary control-plane node in "ha-290859" cluster
	I0414 14:51:25.011347 1221070 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:51:25.011399 1221070 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
	I0414 14:51:25.011409 1221070 cache.go:56] Caching tarball of preloaded images
	I0414 14:51:25.011488 1221070 preload.go:172] Found /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0414 14:51:25.011498 1221070 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0414 14:51:25.011617 1221070 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:51:25.011799 1221070 start.go:360] acquireMachinesLock for ha-290859: {Name:mk496006d22a0565bb9e0d565e1b3cb0cf0971cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 14:51:25.011840 1221070 start.go:364] duration metric: took 23.649µs to acquireMachinesLock for "ha-290859"
	I0414 14:51:25.011855 1221070 start.go:96] Skipping create...Using existing machine configuration
	I0414 14:51:25.011862 1221070 fix.go:54] fixHost starting: 
	I0414 14:51:25.012121 1221070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:51:25.012156 1221070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:51:25.026599 1221070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40091
	I0414 14:51:25.027122 1221070 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:51:25.027660 1221070 main.go:141] libmachine: Using API Version  1
	I0414 14:51:25.027688 1221070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:51:25.028011 1221070 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:51:25.028229 1221070 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:51:25.028380 1221070 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:51:25.030231 1221070 fix.go:112] recreateIfNeeded on ha-290859: state=Stopped err=<nil>
	I0414 14:51:25.030265 1221070 main.go:141] libmachine: (ha-290859) Calling .DriverName
	W0414 14:51:25.030457 1221070 fix.go:138] unexpected machine state, will restart: <nil>
	I0414 14:51:25.032663 1221070 out.go:177] * Restarting existing kvm2 VM for "ha-290859" ...
	I0414 14:51:25.033815 1221070 main.go:141] libmachine: (ha-290859) Calling .Start
	I0414 14:51:25.034026 1221070 main.go:141] libmachine: (ha-290859) starting domain...
	I0414 14:51:25.034048 1221070 main.go:141] libmachine: (ha-290859) ensuring networks are active...
	I0414 14:51:25.034729 1221070 main.go:141] libmachine: (ha-290859) Ensuring network default is active
	I0414 14:51:25.035067 1221070 main.go:141] libmachine: (ha-290859) Ensuring network mk-ha-290859 is active
	I0414 14:51:25.035424 1221070 main.go:141] libmachine: (ha-290859) getting domain XML...
	I0414 14:51:25.036088 1221070 main.go:141] libmachine: (ha-290859) creating domain...
	I0414 14:51:26.234459 1221070 main.go:141] libmachine: (ha-290859) waiting for IP...
	I0414 14:51:26.235587 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:26.236072 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:26.236210 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:26.236086 1221099 retry.go:31] will retry after 280.740636ms: waiting for domain to come up
	I0414 14:51:26.518687 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:26.519197 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:26.519215 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:26.519169 1221099 retry.go:31] will retry after 243.427688ms: waiting for domain to come up
	I0414 14:51:26.765118 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:26.765534 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:26.765582 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:26.765501 1221099 retry.go:31] will retry after 427.840973ms: waiting for domain to come up
	I0414 14:51:27.195132 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:27.195585 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:27.195651 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:27.195569 1221099 retry.go:31] will retry after 469.259994ms: waiting for domain to come up
	I0414 14:51:27.666308 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:27.666685 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:27.666712 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:27.666664 1221099 retry.go:31] will retry after 657.912219ms: waiting for domain to come up
	I0414 14:51:28.326528 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:28.326927 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:28.326955 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:28.326878 1221099 retry.go:31] will retry after 750.684746ms: waiting for domain to come up
	I0414 14:51:29.078742 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:29.079136 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:29.079161 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:29.079097 1221099 retry.go:31] will retry after 1.04198738s: waiting for domain to come up
	I0414 14:51:30.122400 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:30.122774 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:30.122798 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:30.122735 1221099 retry.go:31] will retry after 1.397183101s: waiting for domain to come up
	I0414 14:51:31.522268 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:31.522683 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:31.522709 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:31.522652 1221099 retry.go:31] will retry after 1.778850774s: waiting for domain to come up
	I0414 14:51:33.303491 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:33.303831 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:33.303859 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:33.303809 1221099 retry.go:31] will retry after 2.116605484s: waiting for domain to come up
	I0414 14:51:35.422345 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:35.422804 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:35.422863 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:35.422810 1221099 retry.go:31] will retry after 2.695384495s: waiting for domain to come up
	I0414 14:51:38.120436 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:38.120841 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:38.120862 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:38.120804 1221099 retry.go:31] will retry after 2.291586599s: waiting for domain to come up
	I0414 14:51:40.414425 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:40.414781 1221070 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:51:40.414804 1221070 main.go:141] libmachine: (ha-290859) DBG | I0414 14:51:40.414750 1221099 retry.go:31] will retry after 4.202133346s: waiting for domain to come up
	I0414 14:51:44.622185 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.622671 1221070 main.go:141] libmachine: (ha-290859) found domain IP: 192.168.39.110
	I0414 14:51:44.622701 1221070 main.go:141] libmachine: (ha-290859) reserving static IP address...
	I0414 14:51:44.622714 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has current primary IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.623272 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "ha-290859", mac: "52:54:00:be:9f:8b", ip: "192.168.39.110"} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:44.623307 1221070 main.go:141] libmachine: (ha-290859) DBG | skip adding static IP to network mk-ha-290859 - found existing host DHCP lease matching {name: "ha-290859", mac: "52:54:00:be:9f:8b", ip: "192.168.39.110"}
	I0414 14:51:44.623333 1221070 main.go:141] libmachine: (ha-290859) reserved static IP address 192.168.39.110 for domain ha-290859
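The block above is libmachine polling libvirt's DHCP leases for the domain's MAC address until a lease appears, growing the delay between attempts each round. A minimal Go sketch of that wait-with-backoff loop, with a hypothetical lookupLease helper standing in for the real libvirt query:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupLease stands in for querying libvirt's DHCP leases by MAC;
    // in this sketch it always fails, as the real call does until the
    // guest has requested an address.
    func lookupLease(mac string) (string, error) {
        return "", errors.New("unable to find current IP address")
    }

    // waitForIP retries with a growing, jittered delay, mirroring the
    // "will retry after 280.740636ms" lines in the log.
    func waitForIP(mac string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupLease(mac); err == nil {
                return ip, nil
            }
            sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
            fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
            time.Sleep(sleep)
            delay = delay * 3 / 2 // back off before the next poll
        }
        return "", errors.New("timed out waiting for domain IP")
    }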
	I0414 14:51:44.623346 1221070 main.go:141] libmachine: (ha-290859) waiting for SSH...
	I0414 14:51:44.623353 1221070 main.go:141] libmachine: (ha-290859) DBG | Getting to WaitForSSH function...
	I0414 14:51:44.625584 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.625894 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:44.625919 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.626118 1221070 main.go:141] libmachine: (ha-290859) DBG | Using SSH client type: external
	I0414 14:51:44.626160 1221070 main.go:141] libmachine: (ha-290859) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa (-rw-------)
	I0414 14:51:44.626206 1221070 main.go:141] libmachine: (ha-290859) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.110 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:51:44.626228 1221070 main.go:141] libmachine: (ha-290859) DBG | About to run SSH command:
	I0414 14:51:44.626236 1221070 main.go:141] libmachine: (ha-290859) DBG | exit 0
	I0414 14:51:44.746948 1221070 main.go:141] libmachine: (ha-290859) DBG | SSH cmd err, output: <nil>: 
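WaitForSSH above shells out to the system ssh binary with host-key checking disabled and runs `exit 0` until the command succeeds. A compact, hedged sketch of that readiness probe (fragment; ip and keyPath assumed known from the earlier lease lookup):

    import "os/exec"

    // sshReady probes until sshd inside the VM accepts the key; the
    // options mirror the external SSH client arguments in the log.
    func sshReady(ip, keyPath string) bool {
        cmd := exec.Command("/usr/bin/ssh",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "docker@"+ip, "exit 0")
        return cmd.Run() == nil // nil means "exit 0" actually ran remotely
    }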
	I0414 14:51:44.747341 1221070 main.go:141] libmachine: (ha-290859) Calling .GetConfigRaw
	I0414 14:51:44.748066 1221070 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:51:44.750502 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.750990 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:44.751020 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.751318 1221070 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:51:44.751530 1221070 machine.go:93] provisionDockerMachine start ...
	I0414 14:51:44.751557 1221070 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:51:44.751774 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:51:44.754154 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.754523 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:44.754549 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.754732 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:51:44.754917 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:44.755086 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:44.755209 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:51:44.755372 1221070 main.go:141] libmachine: Using SSH client type: native
	I0414 14:51:44.755592 1221070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:51:44.755609 1221070 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 14:51:44.859385 1221070 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0414 14:51:44.859420 1221070 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:51:44.859703 1221070 buildroot.go:166] provisioning hostname "ha-290859"
	I0414 14:51:44.859733 1221070 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:51:44.859976 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:51:44.862591 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.862947 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:44.862982 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.863100 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:51:44.863336 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:44.863508 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:44.863682 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:51:44.863853 1221070 main.go:141] libmachine: Using SSH client type: native
	I0414 14:51:44.864206 1221070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:51:44.864235 1221070 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-290859 && echo "ha-290859" | sudo tee /etc/hostname
	I0414 14:51:44.980307 1221070 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-290859
	
	I0414 14:51:44.980345 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:51:44.983477 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.983889 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:44.983935 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:44.984061 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:51:44.984280 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:44.984453 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:44.984640 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:51:44.984799 1221070 main.go:141] libmachine: Using SSH client type: native
	I0414 14:51:44.985038 1221070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:51:44.985053 1221070 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-290859' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-290859/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-290859' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:51:45.095107 1221070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
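The shell script above keeps the 127.0.1.1 entry idempotent: it only touches /etc/hosts when no line already ends in the new hostname. The same logic translated to Go, as a sketch (the generated script hard-codes ha-290859 the same way):

    import (
        "os"
        "regexp"
    )

    func fixHosts(hostname string) error {
        hosts, err := os.ReadFile("/etc/hosts")
        if err != nil {
            return err
        }
        // Nothing to do if some line already ends in the hostname.
        if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).Match(hosts) {
            return nil
        }
        loop := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if loop.Match(hosts) {
            hosts = loop.ReplaceAll(hosts, []byte("127.0.1.1 "+hostname))
        } else {
            hosts = append(hosts, []byte("127.0.1.1 "+hostname+"\n")...)
        }
        return os.WriteFile("/etc/hosts", hosts, 0644)
    }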
	I0414 14:51:45.095137 1221070 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1196368/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1196368/.minikube}
	I0414 14:51:45.095159 1221070 buildroot.go:174] setting up certificates
	I0414 14:51:45.095170 1221070 provision.go:84] configureAuth start
	I0414 14:51:45.095189 1221070 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:51:45.095535 1221070 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:51:45.098271 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.098658 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:45.098683 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.098857 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:51:45.101319 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.101590 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:45.101614 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.101756 1221070 provision.go:143] copyHostCerts
	I0414 14:51:45.101791 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:51:45.101823 1221070 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem, removing ...
	I0414 14:51:45.101841 1221070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:51:45.101907 1221070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem (1123 bytes)
	I0414 14:51:45.101983 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:51:45.102001 1221070 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem, removing ...
	I0414 14:51:45.102007 1221070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:51:45.102032 1221070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem (1675 bytes)
	I0414 14:51:45.102075 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:51:45.102097 1221070 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem, removing ...
	I0414 14:51:45.102103 1221070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:51:45.102122 1221070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem (1082 bytes)
	I0414 14:51:45.102165 1221070 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem org=jenkins.ha-290859 san=[127.0.0.1 192.168.39.110 ha-290859 localhost minikube]
	I0414 14:51:45.257877 1221070 provision.go:177] copyRemoteCerts
	I0414 14:51:45.257960 1221070 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:51:45.257996 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:51:45.261081 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.261410 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:45.261440 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.261666 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:51:45.261911 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:45.262125 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:51:45.262285 1221070 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:51:45.340876 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0414 14:51:45.340975 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0414 14:51:45.362634 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0414 14:51:45.362694 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 14:51:45.383617 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0414 14:51:45.383700 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 14:51:45.404718 1221070 provision.go:87] duration metric: took 309.531359ms to configureAuth
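configureAuth above issues a server certificate whose SANs cover the loopback address, the VM IP, the hostname, localhost, and minikube, then copies it to /etc/docker on the guest. A self-contained sketch of issuing such a cert with Go's crypto/x509, assuming an already-loaded CA pair (error handling and the SAN list from the log, nothing else, is taken as given):

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // issueServerCert signs a server certificate with the SANs seen in
    // the log: 127.0.0.1, 192.168.39.110, ha-290859, localhost, minikube.
    func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-290859"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-290859", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.110")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        return der, key, err
    }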
	I0414 14:51:45.404750 1221070 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:51:45.405030 1221070 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:51:45.405049 1221070 machine.go:96] duration metric: took 653.506288ms to provisionDockerMachine
	I0414 14:51:45.405057 1221070 start.go:293] postStartSetup for "ha-290859" (driver="kvm2")
	I0414 14:51:45.405066 1221070 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:51:45.405099 1221070 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:51:45.405452 1221070 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:51:45.405481 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:51:45.408299 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.408642 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:45.408670 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.408811 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:51:45.408995 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:45.409115 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:51:45.409248 1221070 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:51:45.489101 1221070 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:51:45.493122 1221070 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:51:45.493155 1221070 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/addons for local assets ...
	I0414 14:51:45.493230 1221070 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/files for local assets ...
	I0414 14:51:45.493340 1221070 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> 12036392.pem in /etc/ssl/certs
	I0414 14:51:45.493354 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /etc/ssl/certs/12036392.pem
	I0414 14:51:45.493471 1221070 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:51:45.502327 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:51:45.523422 1221070 start.go:296] duration metric: took 118.348669ms for postStartSetup
	I0414 14:51:45.523473 1221070 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:51:45.523812 1221070 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0414 14:51:45.523846 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:51:45.526608 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.526952 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:45.526984 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.527122 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:51:45.527317 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:45.527485 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:51:45.527636 1221070 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:51:45.609005 1221070 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0414 14:51:45.609116 1221070 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0414 14:51:45.667143 1221070 fix.go:56] duration metric: took 20.655266779s for fixHost
	I0414 14:51:45.667202 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:51:45.670139 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.670591 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:45.670620 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.670836 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:51:45.671137 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:45.671338 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:45.671522 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:51:45.671692 1221070 main.go:141] libmachine: Using SSH client type: native
	I0414 14:51:45.671935 1221070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:51:45.671948 1221070 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:51:45.775787 1221070 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744642305.752586107
	
	I0414 14:51:45.775819 1221070 fix.go:216] guest clock: 1744642305.752586107
	I0414 14:51:45.775848 1221070 fix.go:229] Guest: 2025-04-14 14:51:45.752586107 +0000 UTC Remote: 2025-04-14 14:51:45.667180128 +0000 UTC m=+20.782398303 (delta=85.405979ms)
	I0414 14:51:45.775882 1221070 fix.go:200] guest clock delta is within tolerance: 85.405979ms
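The clock check above runs `date +%s.%N` on the guest and compares the result against the host's wall clock, accepting a small skew (85ms here). A sketch of that comparison, reusing the SSH plumbing from earlier (fragment; target is the user@host string):

    import (
        "math"
        "os/exec"
        "strconv"
        "strings"
        "time"
    )

    func guestClockDelta(target string) (time.Duration, error) {
        out, err := exec.Command("/usr/bin/ssh", target, "date +%s.%N").Output()
        if err != nil {
            return 0, err
        }
        secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(int64(secs), int64(math.Mod(secs, 1)*1e9))
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        return delta, nil // caller treats a small delta as "within tolerance"
    }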
	I0414 14:51:45.775900 1221070 start.go:83] releasing machines lock for "ha-290859", held for 20.764045917s
	I0414 14:51:45.775923 1221070 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:51:45.776216 1221070 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:51:45.778889 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.779306 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:45.779339 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.779531 1221070 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:51:45.780063 1221070 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:51:45.780265 1221070 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:51:45.780372 1221070 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:51:45.780417 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:51:45.780527 1221070 ssh_runner.go:195] Run: cat /version.json
	I0414 14:51:45.780554 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:51:45.783291 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.783315 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.783676 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:45.783718 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.783821 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:51:45.783864 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:45.783889 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:45.784002 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:51:45.784123 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:45.784177 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:51:45.784299 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:51:45.784385 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:51:45.784475 1221070 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:51:45.784588 1221070 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:51:45.860084 1221070 ssh_runner.go:195] Run: systemctl --version
	I0414 14:51:45.888174 1221070 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 14:51:45.893495 1221070 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:51:45.893571 1221070 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:51:45.908348 1221070 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 14:51:45.908375 1221070 start.go:495] detecting cgroup driver to use...
	I0414 14:51:45.908446 1221070 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0414 14:51:45.935942 1221070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0414 14:51:45.948409 1221070 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:51:45.948475 1221070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:51:45.960942 1221070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:51:45.974488 1221070 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:51:46.086503 1221070 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:51:46.230317 1221070 docker.go:233] disabling docker service ...
	I0414 14:51:46.230381 1221070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:51:46.244297 1221070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:51:46.256626 1221070 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:51:46.408783 1221070 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:51:46.531425 1221070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
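Before containerd can own the CRI socket, the lines above stop, disable, and mask the competing runtimes (crio, cri-docker, docker). A best-effort Go sketch of that teardown sequence:

    import "os/exec"

    // disableUnit mirrors the stop/disable/mask sequence in the log;
    // failures are tolerated because a given unit may simply not exist
    // on the image.
    func disableUnit(unit string) {
        for _, verb := range []string{"stop", "disable", "mask"} {
            _ = exec.Command("sudo", "systemctl", verb, unit).Run()
        }
    }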
	I0414 14:51:46.544279 1221070 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:51:46.561206 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0414 14:51:46.570536 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0414 14:51:46.579933 1221070 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0414 14:51:46.579987 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0414 14:51:46.589083 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:51:46.598516 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0414 14:51:46.608502 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:51:46.618260 1221070 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:51:46.628002 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0414 14:51:46.637979 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0414 14:51:46.647708 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
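The sed edits above retarget /etc/containerd/config.toml: the pause image, the cgroup driver (SystemdCgroup = false, since minikube picked "cgroupfs"), the runc v2 shim, and the CNI conf_dir. The cgroup-driver rewrite expressed with Go's regexp package instead of sed, as a sketch:

    import "regexp"

    // forceCgroupfs is the equivalent of:
    //   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    // (?m) makes ^ and $ match per line, like sed's line-oriented model,
    // and ${1} preserves the original indentation.
    func forceCgroupfs(conf []byte) []byte {
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        return re.ReplaceAll(conf, []byte("${1}SystemdCgroup = false"))
    }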
	I0414 14:51:46.657465 1221070 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:51:46.666456 1221070 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:51:46.666506 1221070 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:51:46.679179 1221070 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 14:51:46.688058 1221070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:51:46.803994 1221070 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0414 14:51:46.830741 1221070 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0414 14:51:46.830851 1221070 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:51:46.834666 1221070 retry.go:31] will retry after 684.331118ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0414 14:51:47.519413 1221070 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
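Restarting containerd races against its socket reappearing, so the code above stats /run/containerd/containerd.sock in a retry loop with a 60s budget and succeeds on the second attempt. A sketch of that wait:

    import (
        "fmt"
        "os"
        "time"
    )

    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil // socket exists; the runtime is coming up
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for %s", path)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }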
	I0414 14:51:47.524753 1221070 start.go:563] Will wait 60s for crictl version
	I0414 14:51:47.524814 1221070 ssh_runner.go:195] Run: which crictl
	I0414 14:51:47.528401 1221070 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:51:47.567610 1221070 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0414 14:51:47.567684 1221070 ssh_runner.go:195] Run: containerd --version
	I0414 14:51:47.592654 1221070 ssh_runner.go:195] Run: containerd --version
	I0414 14:51:47.616410 1221070 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.23 ...
	I0414 14:51:47.617662 1221070 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:51:47.620124 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:47.620497 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:51:47.620523 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:51:47.620761 1221070 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 14:51:47.624661 1221070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:51:47.636875 1221070 kubeadm.go:883] updating cluster {Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:
default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.112 Port:0 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns
:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 14:51:47.637062 1221070 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:51:47.637127 1221070 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:51:47.668962 1221070 containerd.go:627] all images are preloaded for containerd runtime.
	I0414 14:51:47.668993 1221070 containerd.go:534] Images already preloaded, skipping extraction
	I0414 14:51:47.669051 1221070 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:51:47.700719 1221070 containerd.go:627] all images are preloaded for containerd runtime.
	I0414 14:51:47.700748 1221070 cache_images.go:84] Images are preloaded, skipping loading
	I0414 14:51:47.700756 1221070 kubeadm.go:934] updating node { 192.168.39.110 8443 v1.32.2 containerd true true} ...
	I0414 14:51:47.700911 1221070 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-290859 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.110
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 14:51:47.701015 1221070 ssh_runner.go:195] Run: sudo crictl info
	I0414 14:51:47.733009 1221070 cni.go:84] Creating CNI manager for ""
	I0414 14:51:47.733034 1221070 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0414 14:51:47.733058 1221070 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 14:51:47.733086 1221070 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.110 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-290859 NodeName:ha-290859 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.110"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.110 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 14:51:47.733246 1221070 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.110
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-290859"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.110"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.110"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
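The generated file above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) split on "---", which is what gets scp'd to /var/tmp/minikube/kubeadm.yaml.new a few lines below. One hedged way to sanity-check such a stream, assuming the gopkg.in/yaml.v3 package:

    import (
        "fmt"
        "strings"

        "gopkg.in/yaml.v3"
    )

    // listDocs prints apiVersion/kind for each document in the stream,
    // e.g. "kubeadm.k8s.io/v1beta4 InitConfiguration".
    func listDocs(cfg string) error {
        for _, doc := range strings.Split(cfg, "\n---\n") {
            var meta struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := yaml.Unmarshal([]byte(doc), &meta); err != nil {
                return err
            }
            fmt.Println(meta.APIVersion, meta.Kind)
        }
        return nil
    }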
	
	I0414 14:51:47.733266 1221070 kube-vip.go:115] generating kube-vip config ...
	I0414 14:51:47.733322 1221070 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0414 14:51:47.749704 1221070 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0414 14:51:47.749841 1221070 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.10
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0414 14:51:47.749916 1221070 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 14:51:47.759441 1221070 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 14:51:47.759517 1221070 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0414 14:51:47.768745 1221070 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0414 14:51:47.784598 1221070 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 14:51:47.800512 1221070 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0414 14:51:47.816194 1221070 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1442 bytes)
	I0414 14:51:47.832579 1221070 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0414 14:51:47.836561 1221070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:51:47.848464 1221070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:51:47.961061 1221070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:51:47.977110 1221070 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859 for IP: 192.168.39.110
	I0414 14:51:47.977148 1221070 certs.go:194] generating shared ca certs ...
	I0414 14:51:47.977165 1221070 certs.go:226] acquiring lock for ca certs: {Name:mk7215406b4c41badf9eca6bf9f1036fd88f670e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:51:47.977358 1221070 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key
	I0414 14:51:47.977426 1221070 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key
	I0414 14:51:47.977447 1221070 certs.go:256] generating profile certs ...
	I0414 14:51:47.977567 1221070 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key
	I0414 14:51:47.977595 1221070 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.c955092d
	I0414 14:51:47.977626 1221070 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.c955092d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110 192.168.39.111 192.168.39.254]
	I0414 14:51:48.116172 1221070 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.c955092d ...
	I0414 14:51:48.116203 1221070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.c955092d: {Name:mk9edc6f7524dc9ba3b3dee538c59fbd77ccd148 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:51:48.116397 1221070 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.c955092d ...
	I0414 14:51:48.116412 1221070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.c955092d: {Name:mk18dc0fd4ba99bfeaa95fae1a08a91f3d1054da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:51:48.116516 1221070 certs.go:381] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt.c955092d -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt
	I0414 14:51:48.116679 1221070 certs.go:385] copying /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.c955092d -> /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key
	I0414 14:51:48.116822 1221070 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key
	I0414 14:51:48.116845 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0414 14:51:48.116863 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0414 14:51:48.116876 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0414 14:51:48.116888 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0414 14:51:48.116898 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0414 14:51:48.116907 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0414 14:51:48.116916 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0414 14:51:48.116925 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0414 14:51:48.116971 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem (1338 bytes)
	W0414 14:51:48.117008 1221070 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639_empty.pem, impossibly tiny 0 bytes
	I0414 14:51:48.117018 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 14:51:48.117040 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem (1082 bytes)
	I0414 14:51:48.117066 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:51:48.117086 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem (1675 bytes)
	I0414 14:51:48.117120 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:51:48.117150 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /usr/share/ca-certificates/12036392.pem
	I0414 14:51:48.117163 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:51:48.117173 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem -> /usr/share/ca-certificates/1203639.pem
	I0414 14:51:48.117829 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:51:48.149051 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:51:48.177053 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:51:48.209173 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:51:48.253240 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0414 14:51:48.287575 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 14:51:48.318676 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:51:48.341473 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 14:51:48.364366 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /usr/share/ca-certificates/12036392.pem (1708 bytes)
	I0414 14:51:48.392240 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:51:48.414262 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem --> /usr/share/ca-certificates/1203639.pem (1338 bytes)
	I0414 14:51:48.435434 1221070 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
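
Each "scp ... --> ... (N bytes)" line above is minikube's ssh_runner pushing one local asset to its in-VM destination. Below is a minimal Go sketch of that push, shelling out to the system scp binary; the key path, host, and asset map are illustrative stand-ins, not the runner's real API:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // push copies one local file to its destination on the VM, the way
    // each "scp ... --> ..." log line above does.
    func push(key, host, src, dst string) error {
        cmd := exec.Command("scp", "-i", key,
            "-o", "StrictHostKeyChecking=no", src,
            fmt.Sprintf("docker@%s:%s", host, dst))
        return cmd.Run()
    }

    func main() {
        assets := map[string]string{ // local path -> remote path (illustrative)
            "/home/jenkins/.minikube/ca.crt": "/var/lib/minikube/certs/ca.crt",
        }
        for src, dst := range assets {
            if err := push("/home/jenkins/.ssh/id_rsa", "192.168.39.110", src, dst); err != nil {
                fmt.Println("copy failed:", err)
            }
        }
    }
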
	I0414 14:51:48.451391 1221070 ssh_runner.go:195] Run: openssl version
	I0414 14:51:48.456643 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12036392.pem && ln -fs /usr/share/ca-certificates/12036392.pem /etc/ssl/certs/12036392.pem"
	I0414 14:51:48.467055 1221070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12036392.pem
	I0414 14:51:48.471094 1221070 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/12036392.pem
	I0414 14:51:48.471167 1221070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12036392.pem
	I0414 14:51:48.476620 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12036392.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 14:51:48.487041 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:51:48.497119 1221070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:51:48.501253 1221070 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:51:48.501303 1221070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:51:48.506464 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 14:51:48.516670 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1203639.pem && ln -fs /usr/share/ca-certificates/1203639.pem /etc/ssl/certs/1203639.pem"
	I0414 14:51:48.526675 1221070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1203639.pem
	I0414 14:51:48.530724 1221070 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1203639.pem
	I0414 14:51:48.530790 1221070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1203639.pem
	I0414 14:51:48.536779 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1203639.pem /etc/ssl/certs/51391683.0"
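
The block above is the classic OpenSSL trust-store dance: copy the PEM into /usr/share/ca-certificates, compute its subject hash with `openssl x509 -hash -noout`, then point /etc/ssl/certs/<hash>.0 at it so OpenSSL can find the CA by hash. A minimal Go sketch of one iteration, assuming openssl is on PATH and enough privilege to write /etc/ssl/certs:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // installCA mirrors the log above: hash the cert with
    // `openssl x509 -hash -noout -in <pem>` and link
    // /etc/ssl/certs/<hash>.0 at it.
    func installCA(pem string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pem, err)
        }
        hash := strings.TrimSpace(string(out))
        link := "/etc/ssl/certs/" + hash + ".0"
        _ = os.Remove(link) // equivalent of ln -fs: force re-link
        return os.Symlink(pem, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
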
	I0414 14:51:48.547496 1221070 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:51:48.551752 1221070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0414 14:51:48.557436 1221070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0414 14:51:48.563312 1221070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0414 14:51:48.569039 1221070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0414 14:51:48.575033 1221070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0414 14:51:48.580579 1221070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
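
The `-checkend 86400` flag asks openssl whether the certificate expires within the next 86400 seconds (24 hours). The same test can be done natively with crypto/x509; a sketch, using one of the cert paths from the log:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in the PEM
    // file expires within d, which is what `openssl x509 -checkend` tests.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }
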
	I0414 14:51:48.586320 1221070 kubeadm.go:392] StartCluster: {Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.112 Port:0 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:51:48.586432 1221070 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0414 14:51:48.586516 1221070 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 14:51:48.621007 1221070 cri.go:89] found id: "731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0"
	I0414 14:51:48.621036 1221070 cri.go:89] found id: "0ec0a3a234c7c9ab89ca83a237362a229e9c5f0e94fdbf641b886cf994e1cd2f"
	I0414 14:51:48.621043 1221070 cri.go:89] found id: "922f97d06563e10c12ce83edd45e4f1aa0b78449dcdb50b413a7f4fc80cc346b"
	I0414 14:51:48.621047 1221070 cri.go:89] found id: "2df8ccb8d6ed928a95e69ecd1be2105fc737c699aa26805820a0af0eca5bb50d"
	I0414 14:51:48.621051 1221070 cri.go:89] found id: "e22a81661302ff340c9846a7a06a13d955ab98cfe8e7088e0c805fb4f3eee8a2"
	I0414 14:51:48.621056 1221070 cri.go:89] found id: "9914f8879fc4321c682c89c4d9b8a4cf65aa1773b5281eca94e0f93095a24f4d"
	I0414 14:51:48.621059 1221070 cri.go:89] found id: "8263b35014337f6119ba3a0d6487090fd5b1b3b8a002a99623620e847d186847"
	I0414 14:51:48.621063 1221070 cri.go:89] found id: "3607093f95b0430c4841d7be9ed19d0163ff2e9ee2889a44f89bd1ca07bf42d3"
	I0414 14:51:48.621066 1221070 cri.go:89] found id: "b9d0c942045346e617420beacf1ee53ebaa73b72295bfad233845fe524f8b15c"
	I0414 14:51:48.621076 1221070 cri.go:89] found id: "341626ffff967b14e3bfaa050905eba2b82a07223c0356ee50b5deeef6d9898b"
	I0414 14:51:48.621080 1221070 cri.go:89] found id: ""
	I0414 14:51:48.621136 1221070 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0414 14:51:48.634683 1221070 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-04-14T14:51:48Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0414 14:51:48.634779 1221070 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 14:51:48.644649 1221070 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0414 14:51:48.644668 1221070 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0414 14:51:48.644716 1221070 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0414 14:51:48.653466 1221070 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0414 14:51:48.653918 1221070 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-290859" does not appear in /home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:51:48.654026 1221070 kubeconfig.go:62] /home/jenkins/minikube-integration/20512-1196368/kubeconfig needs updating (will repair): [kubeconfig missing "ha-290859" cluster setting kubeconfig missing "ha-290859" context setting]
	I0414 14:51:48.654307 1221070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/kubeconfig: {Name:mkeb969af3beabfdafe344f27031959a97621135 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:51:48.654727 1221070 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:51:48.654871 1221070 kapi.go:59] client config for ha-290859: &rest.Config{Host:"https://192.168.39.110:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt", KeyFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key", CAFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
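
The client config dumped above is client-go's rest.Config aimed at the apiserver with the profile's client certificate. An equivalent config can be built by hand; a sketch using the host and cert paths from the log (the trailing comment is just an example of what the clientset would then be used for):

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg := &rest.Config{
            Host: "https://192.168.39.110:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: "/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt",
                KeyFile:  "/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key",
                CAFile:   "/home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt",
            },
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            fmt.Println("client:", err)
            return
        }
        _ = clientset // ready for API calls, e.g. clientset.CoreV1().Nodes()
    }
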
	I0414 14:51:48.655325 1221070 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0414 14:51:48.655343 1221070 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0414 14:51:48.655349 1221070 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0414 14:51:48.655355 1221070 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0414 14:51:48.655383 1221070 cert_rotation.go:140] Starting client certificate rotation controller
	I0414 14:51:48.655782 1221070 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0414 14:51:48.666379 1221070 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.110
	I0414 14:51:48.666416 1221070 kubeadm.go:597] duration metric: took 21.742146ms to restartPrimaryControlPlane
	I0414 14:51:48.666430 1221070 kubeadm.go:394] duration metric: took 80.118757ms to StartCluster
	I0414 14:51:48.666454 1221070 settings.go:142] acquiring lock: {Name:mk41907a6d0da0bb56b7cd58b5d8065ec36ecc97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:51:48.666542 1221070 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:51:48.667357 1221070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/kubeconfig: {Name:mkeb969af3beabfdafe344f27031959a97621135 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:51:48.667681 1221070 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0414 14:51:48.667715 1221070 start.go:241] waiting for startup goroutines ...
	I0414 14:51:48.667737 1221070 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 14:51:48.667972 1221070 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:51:48.670730 1221070 out.go:177] * Enabled addons: 
	I0414 14:51:48.671774 1221070 addons.go:514] duration metric: took 4.043718ms for enable addons: enabled=[]
	I0414 14:51:48.671816 1221070 start.go:246] waiting for cluster config update ...
	I0414 14:51:48.671833 1221070 start.go:255] writing updated cluster config ...
	I0414 14:51:48.673542 1221070 out.go:201] 
	I0414 14:51:48.674918 1221070 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:51:48.675012 1221070 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:51:48.676439 1221070 out.go:177] * Starting "ha-290859-m02" control-plane node in "ha-290859" cluster
	I0414 14:51:48.677470 1221070 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:51:48.677501 1221070 cache.go:56] Caching tarball of preloaded images
	I0414 14:51:48.677610 1221070 preload.go:172] Found /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0414 14:51:48.677625 1221070 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0414 14:51:48.677734 1221070 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:51:48.677945 1221070 start.go:360] acquireMachinesLock for ha-290859-m02: {Name:mk496006d22a0565bb9e0d565e1b3cb0cf0971cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 14:51:48.677999 1221070 start.go:364] duration metric: took 29.352µs to acquireMachinesLock for "ha-290859-m02"
	I0414 14:51:48.678015 1221070 start.go:96] Skipping create...Using existing machine configuration
	I0414 14:51:48.678023 1221070 fix.go:54] fixHost starting: m02
	I0414 14:51:48.678300 1221070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:51:48.678338 1221070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:51:48.694625 1221070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46149
	I0414 14:51:48.695133 1221070 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:51:48.695644 1221070 main.go:141] libmachine: Using API Version  1
	I0414 14:51:48.695672 1221070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:51:48.696059 1221070 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:51:48.696257 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:51:48.696396 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetState
	I0414 14:51:48.697918 1221070 fix.go:112] recreateIfNeeded on ha-290859-m02: state=Stopped err=<nil>
	I0414 14:51:48.697944 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	W0414 14:51:48.698147 1221070 fix.go:138] unexpected machine state, will restart: <nil>
	I0414 14:51:48.699709 1221070 out.go:177] * Restarting existing kvm2 VM for "ha-290859-m02" ...
	I0414 14:51:48.700791 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .Start
	I0414 14:51:48.701016 1221070 main.go:141] libmachine: (ha-290859-m02) starting domain...
	I0414 14:51:48.701037 1221070 main.go:141] libmachine: (ha-290859-m02) ensuring networks are active...
	I0414 14:51:48.701680 1221070 main.go:141] libmachine: (ha-290859-m02) Ensuring network default is active
	I0414 14:51:48.701964 1221070 main.go:141] libmachine: (ha-290859-m02) Ensuring network mk-ha-290859 is active
	I0414 14:51:48.702320 1221070 main.go:141] libmachine: (ha-290859-m02) getting domain XML...
	I0414 14:51:48.703123 1221070 main.go:141] libmachine: (ha-290859-m02) creating domain...
	I0414 14:51:49.928511 1221070 main.go:141] libmachine: (ha-290859-m02) waiting for IP...
	I0414 14:51:49.929302 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:51:49.929682 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:51:49.929753 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:51:49.929668 1221256 retry.go:31] will retry after 213.167481ms: waiting for domain to come up
	I0414 14:51:50.144304 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:51:50.144886 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:51:50.144914 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:51:50.144841 1221256 retry.go:31] will retry after 331.221156ms: waiting for domain to come up
	I0414 14:51:50.477450 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:51:50.477938 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:51:50.477993 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:51:50.477923 1221256 retry.go:31] will retry after 310.58732ms: waiting for domain to come up
	I0414 14:51:50.790523 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:51:50.791165 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:51:50.791199 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:51:50.791085 1221256 retry.go:31] will retry after 545.346683ms: waiting for domain to come up
	I0414 14:51:51.337935 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:51:51.338399 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:51:51.338425 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:51:51.338357 1221256 retry.go:31] will retry after 756.05518ms: waiting for domain to come up
	I0414 14:51:52.096242 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:51:52.096695 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:51:52.096730 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:51:52.096648 1221256 retry.go:31] will retry after 823.090094ms: waiting for domain to come up
	I0414 14:51:52.921657 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:51:52.922142 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:51:52.922184 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:51:52.922101 1221256 retry.go:31] will retry after 970.69668ms: waiting for domain to come up
	I0414 14:51:53.894927 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:51:53.895561 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:51:53.895594 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:51:53.895517 1221256 retry.go:31] will retry after 1.032622919s: waiting for domain to come up
	I0414 14:51:54.929442 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:51:54.929927 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:51:54.929952 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:51:54.929923 1221256 retry.go:31] will retry after 1.334812207s: waiting for domain to come up
	I0414 14:51:56.266967 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:51:56.267482 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:51:56.267510 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:51:56.267455 1221256 retry.go:31] will retry after 1.510894415s: waiting for domain to come up
	I0414 14:51:57.780426 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:51:57.780971 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:51:57.781004 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:51:57.780920 1221256 retry.go:31] will retry after 2.39467668s: waiting for domain to come up
	I0414 14:52:00.177702 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:00.178090 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:52:00.178121 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:52:00.178065 1221256 retry.go:31] will retry after 3.552625428s: waiting for domain to come up
	I0414 14:52:03.732281 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:03.732786 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:52:03.732838 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:52:03.732762 1221256 retry.go:31] will retry after 4.321714949s: waiting for domain to come up
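
Every "will retry after ..." line above is one round of a jittered, roughly doubling backoff while the domain's DHCP lease appears. A minimal sketch of that pattern; the probe closure standing in for the IP lookup is hypothetical:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry keeps calling fn with a roughly doubling, jittered delay,
    // like the retry.go lines in the log, until fn succeeds or the
    // deadline passes.
    func retry(deadline time.Duration, fn func() error) error {
        start := time.Now()
        delay := 200 * time.Millisecond
        for {
            if err := fn(); err == nil {
                return nil
            }
            if time.Since(start) > deadline {
                return errors.New("timed out waiting for domain to come up")
            }
            jittered := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: waiting for domain to come up\n", jittered)
            time.Sleep(jittered)
            delay *= 2
        }
    }

    func main() {
        attempts := 0
        _ = retry(2*time.Minute, func() error {
            attempts++
            if attempts < 4 {
                return errors.New("unable to find current IP address")
            }
            return nil // hypothetical: the DHCP lease finally showed up
        })
    }
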
	I0414 14:52:08.057427 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.057990 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has current primary IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.058015 1221070 main.go:141] libmachine: (ha-290859-m02) found domain IP: 192.168.39.111
	I0414 14:52:08.058030 1221070 main.go:141] libmachine: (ha-290859-m02) reserving static IP address...
	I0414 14:52:08.058568 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "ha-290859-m02", mac: "52:54:00:f0:fd:94", ip: "192.168.39.111"} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.058598 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | skip adding static IP to network mk-ha-290859 - found existing host DHCP lease matching {name: "ha-290859-m02", mac: "52:54:00:f0:fd:94", ip: "192.168.39.111"}
	I0414 14:52:08.058616 1221070 main.go:141] libmachine: (ha-290859-m02) reserved static IP address 192.168.39.111 for domain ha-290859-m02
	I0414 14:52:08.058624 1221070 main.go:141] libmachine: (ha-290859-m02) waiting for SSH...
	I0414 14:52:08.058632 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | Getting to WaitForSSH function...
	I0414 14:52:08.061480 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.061822 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.061855 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.062002 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | Using SSH client type: external
	I0414 14:52:08.062025 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa (-rw-------)
	I0414 14:52:08.062058 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:52:08.062073 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | About to run SSH command:
	I0414 14:52:08.062084 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | exit 0
	I0414 14:52:08.183207 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | SSH cmd err, output: <nil>: 
	I0414 14:52:08.183609 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
	I0414 14:52:08.184236 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:52:08.186802 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.187282 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.187322 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.187609 1221070 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:52:08.187825 1221070 machine.go:93] provisionDockerMachine start ...
	I0414 14:52:08.187846 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:52:08.188131 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:52:08.190391 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.190830 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.190855 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.191024 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:52:08.191211 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:08.191410 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:08.191557 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:52:08.191706 1221070 main.go:141] libmachine: Using SSH client type: native
	I0414 14:52:08.192061 1221070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:52:08.192080 1221070 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 14:52:08.291480 1221070 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0414 14:52:08.291525 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:52:08.291906 1221070 buildroot.go:166] provisioning hostname "ha-290859-m02"
	I0414 14:52:08.291946 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:52:08.292200 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:52:08.295446 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.295895 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.295926 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.296203 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:52:08.296433 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:08.296612 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:08.296787 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:52:08.297073 1221070 main.go:141] libmachine: Using SSH client type: native
	I0414 14:52:08.297293 1221070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:52:08.297305 1221070 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-290859-m02 && echo "ha-290859-m02" | sudo tee /etc/hostname
	I0414 14:52:08.410482 1221070 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-290859-m02
	
	I0414 14:52:08.410517 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:52:08.413198 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.413585 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.413621 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.413794 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:52:08.414028 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:08.414223 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:08.414369 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:52:08.414529 1221070 main.go:141] libmachine: Using SSH client type: native
	I0414 14:52:08.414731 1221070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:52:08.414746 1221070 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-290859-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-290859-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-290859-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:52:08.522305 1221070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:52:08.522338 1221070 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1196368/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1196368/.minikube}
	I0414 14:52:08.522355 1221070 buildroot.go:174] setting up certificates
	I0414 14:52:08.522368 1221070 provision.go:84] configureAuth start
	I0414 14:52:08.522377 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:52:08.522678 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:52:08.525718 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.526180 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.526208 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.526396 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:52:08.528768 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.529141 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.529174 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.529288 1221070 provision.go:143] copyHostCerts
	I0414 14:52:08.529323 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:52:08.529356 1221070 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem, removing ...
	I0414 14:52:08.529364 1221070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:52:08.529418 1221070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem (1082 bytes)
	I0414 14:52:08.529544 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:52:08.529566 1221070 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem, removing ...
	I0414 14:52:08.529571 1221070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:52:08.529594 1221070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem (1123 bytes)
	I0414 14:52:08.529638 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:52:08.529656 1221070 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem, removing ...
	I0414 14:52:08.529663 1221070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:52:08.529681 1221070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem (1675 bytes)
	I0414 14:52:08.529727 1221070 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem org=jenkins.ha-290859-m02 san=[127.0.0.1 192.168.39.111 ha-290859-m02 localhost minikube]
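
The server cert is generated with a SAN list covering every name the node answers to (127.0.0.1, the VM IP, the hostname, localhost, minikube). A compressed crypto/x509 sketch of CA-signed server-cert generation with that SAN set; errors are ignored for brevity and the self-signed CA below stands in for the real ca.pem/ca-key.pem:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Self-signed CA (stand-in for ca.pem/ca-key.pem from the log).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTpl, caTpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert carrying the SAN set from the provision.go line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-290859-m02"}},
            DNSNames:     []string{"ha-290859-m02", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.111")},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
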
	I0414 14:52:08.556497 1221070 provision.go:177] copyRemoteCerts
	I0414 14:52:08.556548 1221070 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:52:08.556569 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:52:08.559078 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.559480 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.559504 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.559685 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:52:08.559875 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:08.560067 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:52:08.560219 1221070 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:52:08.637398 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0414 14:52:08.637469 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 14:52:08.661142 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0414 14:52:08.661219 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0414 14:52:08.683109 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0414 14:52:08.683191 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0414 14:52:08.705705 1221070 provision.go:87] duration metric: took 183.321321ms to configureAuth
	I0414 14:52:08.705738 1221070 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:52:08.706026 1221070 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:52:08.706045 1221070 machine.go:96] duration metric: took 518.207609ms to provisionDockerMachine
	I0414 14:52:08.706054 1221070 start.go:293] postStartSetup for "ha-290859-m02" (driver="kvm2")
	I0414 14:52:08.706063 1221070 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:52:08.706087 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:52:08.706363 1221070 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:52:08.706392 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:52:08.709099 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.709429 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.709457 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.709689 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:52:08.709903 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:08.710118 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:52:08.710263 1221070 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:52:08.791281 1221070 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:52:08.795310 1221070 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:52:08.795344 1221070 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/addons for local assets ...
	I0414 14:52:08.795409 1221070 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/files for local assets ...
	I0414 14:52:08.795482 1221070 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> 12036392.pem in /etc/ssl/certs
	I0414 14:52:08.795492 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /etc/ssl/certs/12036392.pem
	I0414 14:52:08.795570 1221070 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:52:08.806018 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:52:08.828791 1221070 start.go:296] duration metric: took 122.715902ms for postStartSetup
	I0414 14:52:08.828841 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:52:08.829192 1221070 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0414 14:52:08.829225 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:52:08.832093 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.832474 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.832500 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.832687 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:52:08.832874 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:08.833046 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:52:08.833191 1221070 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:52:08.914136 1221070 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0414 14:52:08.914227 1221070 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0414 14:52:08.970338 1221070 fix.go:56] duration metric: took 20.292306098s for fixHost
	I0414 14:52:08.970422 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:52:08.973148 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.973612 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:08.973662 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:08.973866 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:52:08.974071 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:08.974273 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:08.974383 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:52:08.974544 1221070 main.go:141] libmachine: Using SSH client type: native
	I0414 14:52:08.974752 1221070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:52:08.974761 1221070 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:52:09.075896 1221070 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744642329.038020711
	
	I0414 14:52:09.075916 1221070 fix.go:216] guest clock: 1744642329.038020711
	I0414 14:52:09.075924 1221070 fix.go:229] Guest: 2025-04-14 14:52:09.038020711 +0000 UTC Remote: 2025-04-14 14:52:08.970369466 +0000 UTC m=+44.085587632 (delta=67.651245ms)
	I0414 14:52:09.075939 1221070 fix.go:200] guest clock delta is within tolerance: 67.651245ms
	I0414 14:52:09.075944 1221070 start.go:83] releasing machines lock for "ha-290859-m02", held for 20.397936123s
	I0414 14:52:09.075962 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:52:09.076232 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:52:09.079036 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:09.079425 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:09.079456 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:09.081479 1221070 out.go:177] * Found network options:
	I0414 14:52:09.082752 1221070 out.go:177]   - NO_PROXY=192.168.39.110
	W0414 14:52:09.084044 1221070 proxy.go:119] fail to check proxy env: Error ip not in block
	I0414 14:52:09.084079 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:52:09.084689 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:52:09.084887 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:52:09.084984 1221070 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:52:09.085023 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	W0414 14:52:09.085117 1221070 proxy.go:119] fail to check proxy env: Error ip not in block
	I0414 14:52:09.085206 1221070 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0414 14:52:09.085232 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:52:09.088187 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:09.088476 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:09.088613 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:09.088643 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:09.088794 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:52:09.088903 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:09.088928 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:09.088974 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:09.089083 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:52:09.089161 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:52:09.089227 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:52:09.089297 1221070 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:52:09.089336 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:52:09.089483 1221070 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	W0414 14:52:09.194292 1221070 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:52:09.194439 1221070 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:52:09.211568 1221070 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 14:52:09.211600 1221070 start.go:495] detecting cgroup driver to use...
	I0414 14:52:09.211684 1221070 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0414 14:52:09.239355 1221070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0414 14:52:09.252164 1221070 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:52:09.252247 1221070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:52:09.266619 1221070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:52:09.279466 1221070 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:52:09.408504 1221070 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:52:09.554621 1221070 docker.go:233] disabling docker service ...
	I0414 14:52:09.554705 1221070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:52:09.567849 1221070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:52:09.579882 1221070 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:52:09.691627 1221070 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:52:09.801979 1221070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 14:52:09.824437 1221070 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:52:09.841408 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0414 14:52:09.851062 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0414 14:52:09.860777 1221070 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0414 14:52:09.860826 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0414 14:52:09.870133 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:52:09.879955 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0414 14:52:09.889567 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:52:09.899405 1221070 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:52:09.909754 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0414 14:52:09.919673 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0414 14:52:09.929572 1221070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
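
	The sed runs above rewrite /etc/containerd/config.toml in place: pause image, OOM-score handling, cgroupfs instead of systemd cgroups, the runc v2 shim, the CNI conf_dir, and unprivileged ports. A sketch of just the SystemdCgroup edit, using the same pattern as the sed expression:

	// patch_containerd.go: set SystemdCgroup = false, as the sed above does.
	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/containerd/config.toml"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		// sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		data = re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
		if err := os.WriteFile(path, data, 0o644); err != nil {
			panic(err)
		}
	}
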
	I0414 14:52:09.939053 1221070 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:52:09.947490 1221070 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:52:09.947546 1221070 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:52:09.959627 1221070 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
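
	The sysctl probe above exits with status 255 because br_netfilter is not loaded yet, so the runner falls back to modprobe and then enables IPv4 forwarding. The same check-then-load fallback as a Go sketch:

	// netfilter_fallback.go: load br_netfilter if its sysctl key is missing.
	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
			// key absent: the bridge netfilter module is not loaded yet
			if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
				panic(string(out))
			}
		}
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
			panic(err)
		}
	}
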
	I0414 14:52:09.968379 1221070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:52:10.086027 1221070 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0414 14:52:10.118333 1221070 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0414 14:52:10.118430 1221070 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:52:10.122969 1221070 retry.go:31] will retry after 818.918333ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0414 14:52:10.943062 1221070 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
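
	After restarting containerd, the runner stats the socket in a retry loop ("Will wait 60s for socket path"); the first stat lands before the daemon has recreated the socket, and the retried stat succeeds about 800ms later. A simplified Go sketch with a fixed poll interval instead of minikube's jittered retry.go backoff:

	// wait_socket.go: poll for a socket path until it exists or a deadline passes.
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
			panic(err)
		}
	}
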
	I0414 14:52:10.948132 1221070 start.go:563] Will wait 60s for crictl version
	I0414 14:52:10.948196 1221070 ssh_runner.go:195] Run: which crictl
	I0414 14:52:10.952231 1221070 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:52:10.988005 1221070 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0414 14:52:10.988097 1221070 ssh_runner.go:195] Run: containerd --version
	I0414 14:52:11.012963 1221070 ssh_runner.go:195] Run: containerd --version
	I0414 14:52:11.038206 1221070 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.23 ...
	I0414 14:52:11.039588 1221070 out.go:177]   - env NO_PROXY=192.168.39.110
	I0414 14:52:11.040724 1221070 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:52:11.043716 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:11.044108 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:59 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:52:11.044129 1221070 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:52:11.044384 1221070 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 14:52:11.048381 1221070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
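
	The bash one-liner above makes the host.minikube.internal pin idempotent: strip any existing entry, append the current one, copy the result back. A Go sketch of the same rewrite (non-atomic, for illustration only):

	// pin_hosts.go: drop stale host.minikube.internal lines, append the new pin.
	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const entry = "192.168.39.1\thost.minikube.internal"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if !strings.HasSuffix(line, "\thost.minikube.internal") {
				kept = append(kept, line)
			}
		}
		out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + entry + "\n"
		if err := os.WriteFile("/etc/hosts", []byte(out), 0o644); err != nil {
			panic(err)
		}
	}
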
	I0414 14:52:11.060281 1221070 mustload.go:65] Loading cluster: ha-290859
	I0414 14:52:11.060535 1221070 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:52:11.060920 1221070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:52:11.060972 1221070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:52:11.076673 1221070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40435
	I0414 14:52:11.077200 1221070 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:52:11.077672 1221070 main.go:141] libmachine: Using API Version  1
	I0414 14:52:11.077694 1221070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:52:11.078067 1221070 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:52:11.078244 1221070 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:52:11.079808 1221070 host.go:66] Checking if "ha-290859" exists ...
	I0414 14:52:11.080127 1221070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:52:11.080174 1221070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:52:11.095417 1221070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37849
	I0414 14:52:11.095844 1221070 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:52:11.096258 1221070 main.go:141] libmachine: Using API Version  1
	I0414 14:52:11.096277 1221070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:52:11.096639 1221070 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:52:11.096826 1221070 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:52:11.096989 1221070 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859 for IP: 192.168.39.111
	I0414 14:52:11.097003 1221070 certs.go:194] generating shared ca certs ...
	I0414 14:52:11.097029 1221070 certs.go:226] acquiring lock for ca certs: {Name:mk7215406b4c41badf9eca6bf9f1036fd88f670e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:52:11.097193 1221070 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key
	I0414 14:52:11.097269 1221070 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key
	I0414 14:52:11.097285 1221070 certs.go:256] generating profile certs ...
	I0414 14:52:11.097381 1221070 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key
	I0414 14:52:11.097463 1221070 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.e4b1b06e
	I0414 14:52:11.097524 1221070 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key
	I0414 14:52:11.097538 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0414 14:52:11.097560 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0414 14:52:11.097577 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0414 14:52:11.097593 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0414 14:52:11.097611 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0414 14:52:11.097629 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0414 14:52:11.097646 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0414 14:52:11.097662 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0414 14:52:11.097724 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem (1338 bytes)
	W0414 14:52:11.097762 1221070 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639_empty.pem, impossibly tiny 0 bytes
	I0414 14:52:11.097777 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 14:52:11.097809 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem (1082 bytes)
	I0414 14:52:11.097839 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:52:11.097866 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem (1675 bytes)
	I0414 14:52:11.097945 1221070 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:52:11.097992 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:52:11.098014 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem -> /usr/share/ca-certificates/1203639.pem
	I0414 14:52:11.098038 1221070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /usr/share/ca-certificates/12036392.pem
	I0414 14:52:11.098070 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:52:11.100966 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:52:11.101386 1221070 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:51:35 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:52:11.101405 1221070 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:52:11.101550 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:52:11.101731 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:52:11.101862 1221070 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:52:11.102010 1221070 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:52:11.175602 1221070 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0414 14:52:11.180006 1221070 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0414 14:52:11.189968 1221070 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0414 14:52:11.193728 1221070 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0414 14:52:11.203099 1221070 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0414 14:52:11.207009 1221070 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0414 14:52:11.216071 1221070 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0414 14:52:11.219518 1221070 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0414 14:52:11.228688 1221070 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0414 14:52:11.232239 1221070 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0414 14:52:11.241095 1221070 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0414 14:52:11.244486 1221070 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0414 14:52:11.253441 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:52:11.277269 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:52:11.299096 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:52:11.320223 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:52:11.341633 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0414 14:52:11.362868 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 14:52:11.386598 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:52:11.408609 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 14:52:11.430516 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:52:11.452312 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem --> /usr/share/ca-certificates/1203639.pem (1338 bytes)
	I0414 14:52:11.474971 1221070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /usr/share/ca-certificates/12036392.pem (1708 bytes)
	I0414 14:52:11.496336 1221070 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0414 14:52:11.511579 1221070 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0414 14:52:11.526436 1221070 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0414 14:52:11.541220 1221070 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0414 14:52:11.556734 1221070 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0414 14:52:11.573710 1221070 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0414 14:52:11.589103 1221070 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0414 14:52:11.604809 1221070 ssh_runner.go:195] Run: openssl version
	I0414 14:52:11.610110 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1203639.pem && ln -fs /usr/share/ca-certificates/1203639.pem /etc/ssl/certs/1203639.pem"
	I0414 14:52:11.620147 1221070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1203639.pem
	I0414 14:52:11.624394 1221070 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1203639.pem
	I0414 14:52:11.624454 1221070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1203639.pem
	I0414 14:52:11.629850 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1203639.pem /etc/ssl/certs/51391683.0"
	I0414 14:52:11.639862 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12036392.pem && ln -fs /usr/share/ca-certificates/12036392.pem /etc/ssl/certs/12036392.pem"
	I0414 14:52:11.649796 1221070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12036392.pem
	I0414 14:52:11.653828 1221070 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/12036392.pem
	I0414 14:52:11.653894 1221070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12036392.pem
	I0414 14:52:11.659174 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12036392.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 14:52:11.669032 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:52:11.678764 1221070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:52:11.682817 1221070 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:52:11.682885 1221070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:52:11.688098 1221070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
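
	The symlink names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes: c_rehash-style lookup expects each CA to be reachable as <hash>.0 under /etc/ssl/certs. A sketch of one hash-and-link step, shelling out to the same openssl invocation the log shows:

	// hash_link.go: install a CA under its OpenSSL subject-hash name.
	package main

	import (
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		const cert = "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. b5213941
		link := "/etc/ssl/certs/" + hash + ".0"
		_ = os.Remove(link) // replace any stale link
		if err := os.Symlink(cert, link); err != nil {
			panic(err)
		}
	}
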
	I0414 14:52:11.697831 1221070 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:52:11.701550 1221070 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 14:52:11.701601 1221070 kubeadm.go:934] updating node {m02 192.168.39.111 8443 v1.32.2 containerd true true} ...
	I0414 14:52:11.701691 1221070 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-290859-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
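
	The kubelet drop-in above is rendered per node: the binary path from the Kubernetes version, hostname-override and node-ip from the machine. An illustrative text/template rendering with hypothetical field names, not minikube's actual template:

	// kubelet_dropin.go: render the per-node kubelet ExecStart drop-in.
	package main

	import (
		"os"
		"text/template"
	)

	const dropin = `[Unit]
	Wants=containerd.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubeVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("dropin").Parse(dropin))
		if err := t.Execute(os.Stdout, map[string]string{
			"KubeVersion": "v1.32.2",
			"NodeName":    "ha-290859-m02",
			"NodeIP":      "192.168.39.111",
		}); err != nil {
			panic(err)
		}
	}
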
	I0414 14:52:11.701720 1221070 kube-vip.go:115] generating kube-vip config ...
	I0414 14:52:11.701774 1221070 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0414 14:52:11.717854 1221070 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0414 14:52:11.717951 1221070 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.10
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
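
	kube-vip runs as a static pod: the manifest above is written into /etc/kubernetes/manifests (see the scp to kube-vip.yaml below) and the kubelet starts it directly; leader election on the plndr-cp-lock lease decides which control-plane node answers for VIP 192.168.39.254. A minimal sketch of installing a rendered manifest, where vipManifest stands in for the YAML above:

	// install_kubevip.go: drop a static-pod manifest where the kubelet watches.
	package main

	import (
		"os"
		"path/filepath"
	)

	func main() {
		vipManifest := os.Getenv("KUBE_VIP_MANIFEST") // hypothetical source for the YAML above
		dir := "/etc/kubernetes/manifests"
		if err := os.MkdirAll(dir, 0o755); err != nil {
			panic(err)
		}
		if err := os.WriteFile(filepath.Join(dir, "kube-vip.yaml"), []byte(vipManifest), 0o644); err != nil {
			panic(err)
		}
	}
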
	I0414 14:52:11.718009 1221070 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 14:52:11.727618 1221070 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 14:52:11.727676 1221070 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0414 14:52:11.736203 1221070 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0414 14:52:11.751774 1221070 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 14:52:11.768120 1221070 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1442 bytes)
	I0414 14:52:11.783489 1221070 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0414 14:52:11.787006 1221070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:52:11.798424 1221070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:52:11.903985 1221070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:52:11.921547 1221070 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0414 14:52:11.921874 1221070 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:52:11.923383 1221070 out.go:177] * Verifying Kubernetes components...
	I0414 14:52:11.924548 1221070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:52:12.079718 1221070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:52:12.096131 1221070 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:52:12.096280 1221070 kapi.go:59] client config for ha-290859: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt", KeyFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key", CAFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0414 14:52:12.096344 1221070 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.110:8443
	I0414 14:52:12.096629 1221070 node_ready.go:35] waiting up to 6m0s for node "ha-290859-m02" to be "Ready" ...
	I0414 14:52:12.096770 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:12.096778 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:12.096786 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:12.096792 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:12.105014 1221070 round_trippers.go:581] Response Status: 404 Not Found in 8 milliseconds
	I0414 14:52:12.596840 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:12.596864 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:12.596873 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:12.596878 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:12.599193 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:13.096896 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:13.096921 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:13.096930 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:13.096935 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:13.099008 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:13.597788 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:13.597813 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:13.597822 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:13.597826 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:13.600141 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:14.097364 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:14.097390 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:14.097398 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:14.097401 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:14.099682 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:14.099822 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
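
	From here the runner polls GET /api/v1/nodes/ha-290859-m02 roughly every 500ms, logging a summary every few attempts; the 404s persist because the m02 node object is never created, and the loop runs toward its 6m deadline. A stdlib-only sketch of the same existence poll (the real client authenticates with the profile's client certificate and then also checks the node's Ready condition; this sketch only waits for a 200):

	// poll_node.go: poll the apiserver until a node object exists or we time out.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		const url = "https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02"
		// InsecureSkipVerify keeps the sketch self-contained; do not do this
		// against a real cluster.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("node object exists")
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for node")
	}
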
	I0414 14:52:14.597362 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:14.597390 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:14.597401 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:14.597407 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:14.599923 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:15.096865 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:15.096890 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:15.096898 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:15.096903 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:15.099533 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:15.597246 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:15.597272 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:15.597280 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:15.597285 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:15.599591 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:16.096978 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:16.097005 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:16.097014 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:16.097019 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:16.099644 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:16.597351 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:16.597377 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:16.597385 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:16.597389 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:16.599794 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:16.599885 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:52:17.097583 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:17.097609 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:17.097621 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:17.097630 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:17.099987 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:17.597752 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:17.597777 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:17.597792 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:17.597798 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:17.599966 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:18.097796 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:18.097830 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:18.097843 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:18.097850 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:18.100104 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:18.597881 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:18.597906 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:18.597918 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:18.597923 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:18.600349 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:18.600437 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:52:19.097732 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:19.097758 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:19.097766 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:19.097772 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:19.100346 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:19.597034 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:19.597059 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:19.597074 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:19.597081 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:19.600054 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:20.097051 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:20.097075 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:20.097085 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:20.097091 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:20.099439 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:20.597189 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:20.597218 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:20.597230 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:20.597234 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:20.599635 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:21.097052 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:21.097078 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:21.097090 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:21.097095 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:21.099916 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:21.100012 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:52:21.597682 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:21.597708 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:21.597716 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:21.597722 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:21.600175 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:22.097764 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:22.097789 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:22.097798 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:22.097803 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:22.100278 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:22.596982 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:22.597008 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:22.597017 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:22.597021 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:22.599616 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:23.097388 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:23.097414 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:23.097423 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:23.097428 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:23.099818 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:23.597623 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:23.597655 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:23.597664 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:23.597669 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:23.600007 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:23.600102 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:52:24.097112 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:24.097137 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:24.097147 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:24.097151 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:24.099644 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:24.597329 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:24.597355 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:24.597363 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:24.597369 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:24.599961 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:25.096893 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:25.096919 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:25.096928 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:25.096934 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:25.098708 1221070 round_trippers.go:581] Response Status: 404 Not Found in 1 milliseconds
	I0414 14:52:25.597473 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:25.597500 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:25.597509 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:25.597514 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:25.600056 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:25.600156 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:52:26.097355 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:26.097378 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:26.097387 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:26.097391 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:26.099832 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:26.597648 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:26.597673 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:26.597684 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:26.597687 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:26.600271 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:27.096929 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:27.096954 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:27.096963 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:27.096967 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:27.099168 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:27.596858 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:27.596884 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:27.596893 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:27.596899 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:27.599457 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:28.096940 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:28.096964 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:28.096972 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:28.097006 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:28.099432 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:28.099546 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:52:28.597101 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:28.597126 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:28.597135 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:28.597140 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:28.599552 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:29.097020 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:29.097048 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:29.097060 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:29.097067 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:29.099638 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:29.597365 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:29.597391 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:29.597399 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:29.597405 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:29.599700 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:30.097686 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:30.097711 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:30.097720 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:30.097726 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:30.099828 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:30.099939 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:52:30.597659 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:30.597687 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:30.597696 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:30.597701 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:30.600246 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:31.097571 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:31.097595 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:31.097603 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:31.097608 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:31.100169 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:31.597822 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:31.597851 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:31.597861 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:31.597870 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:31.600466 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:32.097138 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:32.097164 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:32.097173 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:32.097177 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:32.099723 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:32.597477 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:32.597503 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:32.597511 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:32.597515 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:32.599830 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:32.599932 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:52:33.097613 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:33.097641 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:33.097649 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:33.097654 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:33.099925 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:33.597289 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:33.597314 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:33.597323 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:33.597327 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:33.599654 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:34.096888 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:34.096919 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:34.096927 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:34.096933 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:34.099431 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:34.596955 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:34.596980 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:34.596989 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:34.596993 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:34.599335 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:35.097100 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:35.097123 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:35.097131 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:35.097137 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:35.099289 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:35.099382 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:52:35.596984 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:35.597012 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:35.597021 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:35.597025 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:35.599385 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:36.097705 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:36.097729 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:36.097738 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:36.097743 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:36.100126 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:36.597126 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:36.597155 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:36.597165 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:36.597169 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:36.600643 1221070 round_trippers.go:581] Response Status: 404 Not Found in 3 milliseconds
	I0414 14:52:37.097395 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:37.097421 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:37.097430 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:37.097434 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:37.099784 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:37.099868 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:52:37.597613 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:37.597644 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:37.597653 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:37.597658 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:37.599841 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:38.097708 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:38.097734 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:38.097743 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:38.097746 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:38.100373 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:38.597097 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:38.597124 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:38.597132 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:38.597137 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:38.599858 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:39.097386 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:39.097414 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:39.097422 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:39.097428 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:39.099969 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:39.100071 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:52:39.597770 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:39.597797 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:39.597806 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:39.597811 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:39.600350 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:40.097448 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:40.097473 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:40.097482 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:40.097487 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:40.099992 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:40.597766 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:40.597794 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:40.597802 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:40.597807 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:40.600235 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:41.097595 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:41.097620 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:41.097628 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:41.097633 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:41.100188 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:41.100291 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:52:41.597223 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:41.597251 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:41.597259 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:41.597264 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:41.599796 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:42.097539 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:42.097565 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:42.097574 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:42.097578 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:42.099998 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:42.596849 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:42.596874 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:42.596882 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:42.596886 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:42.599276 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:43.097056 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:43.097082 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:43.097091 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:43.097095 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:43.099531 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:43.597247 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:43.597271 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:43.597279 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:43.597283 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:43.599641 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:43.599742 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:52:44.097877 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:44.097905 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:44.097916 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:44.097922 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:44.100517 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:44.597248 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:44.597278 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:44.597286 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:44.597290 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:44.599800 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:45.097824 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:45.097852 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:45.097861 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:45.097865 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:45.100105 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:45.597856 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:45.597883 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:45.597892 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:45.597898 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:45.600432 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:45.600532 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:52:46.097855 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:46.097880 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:46.097888 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:46.097891 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:46.100551 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:46.597726 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:46.597754 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:46.597767 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:46.597772 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:46.600401 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:47.097070 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:47.097095 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:47.097104 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:47.097108 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:47.102860 1221070 round_trippers.go:581] Response Status: 404 Not Found in 5 milliseconds
	I0414 14:52:47.597648 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:47.597673 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:47.597682 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:47.597686 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:47.600174 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:48.096965 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:48.096990 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:48.096998 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:48.097002 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:48.099639 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:48.099731 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:52:48.597371 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:48.597405 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:48.597416 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:48.597421 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:48.599718 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:49.097094 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:49.097133 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:49.097142 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:49.097145 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:49.099888 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:49.597678 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:49.597705 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:49.597713 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:49.597718 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:49.600370 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:50.097228 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:50.097253 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:50.097261 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:50.097266 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:50.100034 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:50.100119 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:52:50.597914 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:50.597948 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:50.597961 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:50.597967 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:50.601343 1221070 round_trippers.go:581] Response Status: 404 Not Found in 3 milliseconds
	I0414 14:52:51.097653 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:51.097679 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:51.097690 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:51.097694 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:51.100291 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:51.597623 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:51.597656 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:51.597667 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:51.597675 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:51.606437 1221070 round_trippers.go:581] Response Status: 404 Not Found in 8 milliseconds
	I0414 14:52:52.097142 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:52.097174 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:52.097186 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:52.097203 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:52.100953 1221070 round_trippers.go:581] Response Status: 404 Not Found in 3 milliseconds
	I0414 14:52:52.101053 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:52:52.597793 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:52.597822 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:52.597836 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:52.597844 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:52.600495 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:53.097203 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:53.097229 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:53.097238 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:53.097242 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:53.099616 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:53.597366 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:53.597390 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:53.597399 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:53.597404 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:53.599831 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:54.097057 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:54.097083 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:54.097092 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:54.097096 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:54.099423 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:54.596995 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:54.597022 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:54.597031 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:54.597042 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:54.599588 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:54.599693 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:52:55.097848 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:55.097874 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:55.097882 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:55.097887 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:55.100242 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:55.597035 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:55.597062 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:55.597072 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:55.597077 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:55.599583 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:56.096912 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:56.096939 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:56.096948 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:56.096952 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:56.099376 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:56.597699 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:56.597725 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:56.597734 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:56.597739 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:56.600266 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:56.600543 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:52:57.097172 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:57.097200 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:57.097209 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:57.097215 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:57.099784 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:57.597610 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:57.597642 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:57.597655 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:57.597663 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:57.599863 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:58.097691 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:58.097721 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:58.097734 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:58.097740 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:58.100041 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:58.597837 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:58.597862 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:58.597870 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:58.597875 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:58.600624 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:58.600730 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:52:59.096948 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:59.096975 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:59.096984 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:59.096989 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:59.099096 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:52:59.597907 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:52:59.597935 1221070 round_trippers.go:476] Request Headers:
	I0414 14:52:59.597947 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:52:59.597953 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:52:59.600401 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:00.097602 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:00.097627 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:00.097636 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:00.097641 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:00.099750 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:00.597486 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:00.597512 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:00.597522 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:00.597527 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:00.599885 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:01.097325 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:01.097358 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:01.097371 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:01.097391 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:01.099717 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:01.099833 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:01.596958 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:01.596983 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:01.596992 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:01.596997 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:01.599356 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:02.097071 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:02.097122 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:02.097131 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:02.097138 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:02.099343 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:02.597036 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:02.597063 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:02.597071 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:02.597075 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:02.599771 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:03.097565 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:03.097592 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:03.097600 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:03.097604 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:03.099792 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:03.099897 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:03.597552 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:03.597585 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:03.597595 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:03.597599 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:03.600018 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:04.096976 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:04.097001 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:04.097009 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:04.097013 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:04.099528 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:04.597239 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:04.597267 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:04.597276 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:04.597283 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:04.599533 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:05.097665 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:05.097691 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:05.097699 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:05.097703 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:05.100338 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:05.100439 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:05.597081 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:05.597106 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:05.597116 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:05.597121 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:05.600398 1221070 round_trippers.go:581] Response Status: 404 Not Found in 3 milliseconds
	I0414 14:53:06.097630 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:06.097656 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:06.097665 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:06.097670 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:06.100398 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:06.597714 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:06.597739 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:06.597748 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:06.597752 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:06.600470 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:07.097213 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:07.097240 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:07.097250 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:07.097253 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:07.099963 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:07.597789 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:07.597816 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:07.597826 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:07.597831 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:07.600855 1221070 round_trippers.go:581] Response Status: 404 Not Found in 3 milliseconds
	I0414 14:53:07.600957 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:08.097673 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:08.097701 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:08.097710 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:08.097715 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:08.100645 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:08.597358 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:08.597384 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:08.597393 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:08.597397 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:08.599788 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:09.097393 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:09.097420 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:09.097429 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:09.097434 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:09.099924 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:09.597707 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:09.597732 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:09.597742 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:09.597747 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:09.599970 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:10.097178 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:10.097207 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:10.097216 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:10.097221 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:10.099537 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:10.099624 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:10.597236 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:10.597263 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:10.597271 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:10.597275 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:10.599552 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:11.097961 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:11.097993 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:11.098008 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:11.098016 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:11.100563 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:11.597756 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:11.597782 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:11.597790 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:11.597795 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:11.600339 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:12.097054 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:12.097083 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:12.097093 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:12.097099 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:12.099641 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:12.099739 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:12.597376 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:12.597402 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:12.597411 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:12.597417 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:12.599658 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:13.097459 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:13.097484 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:13.097492 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:13.097502 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:13.099810 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:13.597571 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:13.597596 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:13.597605 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:13.597609 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:13.600010 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:14.096947 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:14.096970 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:14.096979 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:14.096990 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:14.099343 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:14.597063 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:14.597091 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:14.597101 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:14.597105 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:14.599641 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:14.599723 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:15.097631 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:15.097658 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:15.097668 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:15.097682 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:15.100287 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:15.597176 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:15.597202 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:15.597211 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:15.597215 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:15.599531 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:16.097711 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:16.097732 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:16.097742 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:16.097746 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:16.101211 1221070 round_trippers.go:581] Response Status: 404 Not Found in 3 milliseconds
	I0414 14:53:16.597571 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:16.597597 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:16.597606 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:16.597610 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:16.599963 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:16.600075 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:17.097758 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:17.097783 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:17.097792 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:17.097796 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:17.099932 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:17.597691 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:17.597718 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:17.597727 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:17.597733 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:17.600352 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:18.097050 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:18.097078 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:18.097089 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:18.097096 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:18.099428 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:18.597110 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:18.597145 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:18.597157 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:18.597166 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:18.599600 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:19.096963 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:19.096987 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:19.096998 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:19.097003 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:19.099491 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:19.099580 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:19.597231 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:19.597263 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:19.597276 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:19.597283 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:19.600009 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:20.096886 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:20.096914 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:20.096926 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:20.096932 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:20.099209 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:20.596960 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:20.596986 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:20.596998 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:20.597004 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:20.599960 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:21.097055 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:21.097077 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:21.097088 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:21.097094 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:21.099402 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:21.597633 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:21.597662 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:21.597674 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:21.597680 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:21.599894 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:21.600006 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:22.097732 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:22.097762 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:22.097774 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:22.097782 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:22.100319 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:22.597118 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:22.597146 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:22.597157 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:22.597163 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:22.599684 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:23.097462 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:23.097495 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:23.097507 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:23.097513 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:23.100099 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:23.597914 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:23.597944 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:23.597953 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:23.597959 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:23.600364 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:23.600532 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:53:24.097607 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:24.097632 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:24.097640 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:24.097644 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:24.100185 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:24.596899 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:24.596940 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:24.596951 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:24.596957 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:24.599633 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:25.097761 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:25.097789 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:25.097803 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:25.097808 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:25.100205 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:25.596931 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:25.596958 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:25.596969 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:25.596974 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:25.599583 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:26.097899 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:53:26.097925 1221070 round_trippers.go:476] Request Headers:
	I0414 14:53:26.097934 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:53:26.097938 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:53:26.100330 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:53:26.100425 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	[the identical GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02 poll repeats at roughly 500 ms intervals from 14:53:26 through 14:54:22, every attempt returning 404 Not Found in 1-3 milliseconds; node_ready.go:53 logs the same error getting node "ha-290859-m02": nodes "ha-290859-m02" not found after every few polls]
	I0414 14:54:22.097325 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:22.097355 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:22.097366 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:22.097370 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:22.099812 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:22.597762 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:22.597792 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:22.597804 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:22.597817 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:22.599813 1221070 round_trippers.go:581] Response Status: 404 Not Found in 1 milliseconds
	I0414 14:54:23.097828 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:23.097858 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:23.097871 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:23.097881 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:23.100396 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:23.597213 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:23.597241 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:23.597252 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:23.597258 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:23.599717 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:23.599796 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:24.096996 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:24.097021 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:24.097049 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:24.097055 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:24.099311 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:24.597126 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:24.597149 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:24.597157 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:24.597162 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:24.599602 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:25.097673 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:25.097695 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:25.097703 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:25.097710 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:25.099822 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:25.597641 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:25.597667 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:25.597675 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:25.597678 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:25.600012 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:25.600100 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:26.097816 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:26.097842 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:26.097850 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:26.097854 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:26.100489 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:26.597097 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:26.597122 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:26.597132 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:26.597137 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:26.599865 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:27.097687 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:27.097714 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:27.097723 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:27.097728 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:27.100355 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:27.597087 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:27.597111 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:27.597124 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:27.597128 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:27.599434 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:28.097160 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:28.097192 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:28.097200 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:28.097205 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:28.099497 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:28.099582 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:28.597237 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:28.597261 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:28.597272 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:28.597278 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:28.599694 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:29.097091 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:29.097118 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:29.097127 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:29.097132 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:29.099540 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:29.597363 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:29.597392 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:29.597405 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:29.597411 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:29.600172 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:30.097121 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:30.097144 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:30.097153 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:30.097157 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:30.099513 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:30.099612 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:30.597347 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:30.597371 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:30.597380 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:30.597384 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:30.600156 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:31.096952 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:31.096988 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:31.096997 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:31.097001 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:31.099465 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:31.597116 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:31.597143 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:31.597153 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:31.597158 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:31.599567 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:32.097317 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:32.097346 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:32.097358 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:32.097365 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:32.099660 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:32.099757 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:32.597405 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:32.597430 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:32.597439 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:32.597441 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:32.599811 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:33.097627 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:33.097653 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:33.097662 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:33.097667 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:33.099982 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:33.597753 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:33.597778 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:33.597787 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:33.597792 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:33.600559 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:34.097871 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:34.097899 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:34.097912 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:34.097919 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:34.100469 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:34.100556 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:34.597193 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:34.597217 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:34.597226 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:34.597232 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:34.600162 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:35.097109 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:35.097135 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:35.097144 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:35.097149 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:35.099576 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:35.597285 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:35.597313 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:35.597326 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:35.597333 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:35.599938 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:36.096921 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:36.096946 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:36.096954 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:36.096959 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:36.099227 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:36.597866 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:36.597904 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:36.597913 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:36.597919 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:36.600354 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:36.600463 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:37.097063 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:37.097090 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:37.097100 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:37.097105 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:37.099379 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:37.597122 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:37.597146 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:37.597154 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:37.597158 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:37.599519 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:38.097366 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:38.097393 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:38.097408 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:38.097414 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:38.099965 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:38.597915 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:38.597940 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:38.597949 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:38.597954 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:38.600572 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:38.600660 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:39.097060 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:39.097087 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:39.097096 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:39.097101 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:39.099507 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:39.597337 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:39.597362 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:39.597371 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:39.597375 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:39.599715 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:40.097688 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:40.097713 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:40.097724 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:40.097729 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:40.100033 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:40.596909 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:40.596939 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:40.596951 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:40.596957 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:40.599175 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:41.097072 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:41.097099 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:41.097107 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:41.097111 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:41.099460 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:41.099539 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:41.597139 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:41.597165 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:41.597174 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:41.597178 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:41.599709 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:42.097560 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:42.097587 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:42.097595 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:42.097600 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:42.099863 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:42.597812 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:42.597845 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:42.597862 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:42.597870 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:42.600230 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:43.096959 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:43.096985 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:43.096994 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:43.096999 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:43.099603 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:43.099685 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:43.597369 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:43.597397 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:43.597407 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:43.597412 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:43.599491 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:44.097845 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:44.097872 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:44.097882 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:44.097886 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:44.100129 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:44.597908 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:44.597935 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:44.597944 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:44.597949 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:44.600197 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:45.097116 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:45.097145 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:45.097154 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:45.097158 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:45.099461 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:45.597363 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:45.597392 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:45.597403 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:45.597408 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:45.599811 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:45.599899 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:46.097776 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:46.097801 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:46.097809 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:46.097814 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:46.100355 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:46.597079 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:46.597104 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:46.597112 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:46.597118 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:46.599632 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:47.097368 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:47.097414 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:47.097423 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:47.097427 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:47.099773 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:47.597600 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:47.597624 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:47.597632 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:47.597637 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:47.600105 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:47.600192 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:48.096873 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:48.096905 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:48.096921 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:48.096927 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:48.099178 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:48.596912 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:48.596938 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:48.596945 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:48.596952 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:48.599004 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:49.097608 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:49.097631 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:49.097641 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:49.097645 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:49.099908 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:49.597696 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:49.597722 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:49.597730 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:49.597735 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:49.600131 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:49.600216 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:50.097068 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:50.097094 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:50.097103 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:50.097108 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:50.099234 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:50.596970 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:50.596997 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:50.597008 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:50.597012 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:50.599499 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:51.097376 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:51.097404 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:51.097433 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:51.097437 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:51.099811 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:51.597585 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:51.597611 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:51.597620 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:51.597624 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:51.600264 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:51.600359 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:52.097120 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:52.097146 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:52.097155 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:52.097159 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:52.100007 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:52.596856 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:52.596893 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:52.596902 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:52.596908 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:52.599385 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:53.097209 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:53.097237 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:53.097245 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:53.097249 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:53.099552 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:53.597353 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:53.597378 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:53.597387 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:53.597396 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:53.599946 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:54.097385 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:54.097410 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:54.097419 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:54.097425 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:54.099753 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:54.099849 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:54.597114 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:54.597140 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:54.597152 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:54.597159 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:54.599304 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:55.097077 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:55.097101 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:55.097109 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:55.097116 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:55.099594 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:55.597394 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:55.597430 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:55.597443 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:55.597448 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:55.599922 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:56.097857 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:56.097882 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:56.097891 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:56.097896 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:56.099961 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:56.100052 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:56.597806 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:56.597832 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:56.597841 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:56.597846 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:56.600303 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:57.097159 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:57.097187 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:57.097195 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:57.097200 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:57.099508 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:57.597505 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:57.597532 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:57.597541 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:57.597545 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:57.600204 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:58.097048 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:58.097074 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:58.097082 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:58.097086 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:58.099381 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:58.597205 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:58.597230 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:58.597239 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:58.597245 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:58.599451 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:58.599546 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:54:59.097886 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:59.097918 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:59.097931 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:59.097939 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:59.100163 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:54:59.596982 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:54:59.597010 1221070 round_trippers.go:476] Request Headers:
	I0414 14:54:59.597021 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:54:59.597026 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:54:59.599059 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:00.097066 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:00.097091 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:00.097103 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:00.097109 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:00.099359 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:00.597072 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:00.597098 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:00.597107 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:00.597113 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:00.599230 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:01.096958 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:01.096983 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:01.096991 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:01.096997 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:01.099098 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:01.099184 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:01.596893 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:01.596921 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:01.596933 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:01.596939 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:01.599452 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:02.097155 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:02.097182 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:02.097191 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:02.097197 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:02.099208 1221070 round_trippers.go:581] Response Status: 404 Not Found in 1 milliseconds
	I0414 14:55:02.596931 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:02.596957 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:02.596968 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:02.596973 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:02.598907 1221070 round_trippers.go:581] Response Status: 404 Not Found in 1 milliseconds
	I0414 14:55:03.097709 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:03.097736 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:03.097744 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:03.097749 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:03.100088 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:03.100185 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:03.597905 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:03.597933 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:03.597944 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:03.597949 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:03.600246 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:04.097651 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:04.097679 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:04.097687 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:04.097693 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:04.100045 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:04.597839 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:04.597876 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:04.597885 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:04.597890 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:04.600163 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:05.097176 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:05.097200 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:05.097210 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:05.097214 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:05.099624 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:05.597323 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:05.597350 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:05.597360 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:05.597365 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:05.599598 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:05.599695 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:55:06.097552 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:06.097582 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:06.097591 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:06.097595 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:06.099900 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:06.597946 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:06.597974 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:06.597982 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:06.597988 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:06.600426 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:07.097279 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:07.097306 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:07.097315 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:07.097320 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:07.099371 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:07.597212 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:07.597236 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:07.597245 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:07.597250 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:07.599340 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:08.097240 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:55:08.097274 1221070 round_trippers.go:476] Request Headers:
	I0414 14:55:08.097289 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:55:08.097296 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:55:08.099717 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:55:08.099814 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	[... the same GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02 poll repeats every ~500 ms with identical request headers, each answered 404 Not Found in 1-3 milliseconds; node_ready.go:53 records error getting node "ha-290859-m02": nodes "ha-290859-m02" not found after every fifth poll (roughly every 2-3 seconds), from 14:55:08 through 14:56:03 ...]
	I0414 14:56:03.597532 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:03.597557 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:03.597565 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:03.597570 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:03.599887 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:03.599994 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:56:04.097156 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:04.097182 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:04.097193 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:04.097202 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:04.099543 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:04.597391 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:04.597422 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:04.597434 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:04.597441 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:04.599613 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:05.097696 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:05.097719 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:05.097727 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:05.097733 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:05.101649 1221070 round_trippers.go:581] Response Status: 404 Not Found in 3 milliseconds
	I0414 14:56:05.597340 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:05.597364 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:05.597373 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:05.597379 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:05.599888 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:05.600026 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:56:06.097634 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:06.097659 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:06.097668 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:06.097672 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:06.099863 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:06.597652 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:06.597686 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:06.597701 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:06.597707 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:06.599965 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:07.097782 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:07.097812 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:07.097825 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:07.097833 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:07.100367 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:07.597100 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:07.597132 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:07.597144 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:07.597151 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:07.599359 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:08.097183 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:08.097225 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:08.097240 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:08.097248 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:08.099618 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:08.099711 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:56:08.597331 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:08.597358 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:08.597370 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:08.597377 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:08.599820 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:09.097223 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:09.097254 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:09.097264 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:09.097268 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:09.099655 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:09.597538 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:09.597562 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:09.597570 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:09.597576 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:09.599815 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:10.097831 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:10.097853 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:10.097861 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:10.097865 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:10.100242 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:10.100337 1221070 node_ready.go:53] error getting node "ha-290859-m02": nodes "ha-290859-m02" not found
	I0414 14:56:10.597109 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:10.597137 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:10.597146 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:10.597152 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:10.600167 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:11.097037 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:11.097061 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:11.097070 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:11.097076 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:11.099474 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:11.597114 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:11.597141 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:11.597150 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:11.597155 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:11.599707 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:12.097023 1221070 round_trippers.go:470] GET https://192.168.39.110:8443/api/v1/nodes/ha-290859-m02
	I0414 14:56:12.097048 1221070 round_trippers.go:476] Request Headers:
	I0414 14:56:12.097056 1221070 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0414 14:56:12.097061 1221070 round_trippers.go:480]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0414 14:56:12.099277 1221070 round_trippers.go:581] Response Status: 404 Not Found in 2 milliseconds
	I0414 14:56:12.099371 1221070 node_ready.go:38] duration metric: took 4m0.002706246s for node "ha-290859-m02" to be "Ready" ...
	I0414 14:56:12.101227 1221070 out.go:201] 
	W0414 14:56:12.102352 1221070 out.go:270] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0414 14:56:12.102371 1221070 out.go:270] * 
	W0414 14:56:12.103364 1221070 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 14:56:12.104737 1221070 out.go:201] 
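
The 404 loop above is minikube's node-ready wait (node_ready.go): it re-issues GET /api/v1/nodes/ha-290859-m02 roughly every 500ms until the Node object exists and reports Ready, and gives up when its window closes, here after 4m0.002s, which the outer 6m "wait for node" then surfaces as GUEST_START. Below is a minimal sketch of an equivalent poll with client-go; the function name, the 500ms interval, and the kubeconfig path are illustrative assumptions, not minikube's actual implementation.

    // Sketch: poll a Node until its Ready condition is True or the deadline
    // expires. NotFound (the 404s above) just means "keep waiting": the node
    // has not registered with the apiserver yet.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    			if apierrors.IsNotFound(err) {
    				return false, nil // node not registered yet; retry
    			}
    			if err != nil {
    				return false, err
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return true, nil
    				}
    			}
    			return false, nil
    		})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	if err := waitNodeReady(context.Background(), cs, "ha-290859-m02", 4*time.Minute); err != nil {
    		fmt.Println("node never became ready:", err) // matches the timeout above
    	}
    }
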
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ea9e85492cab1       6e38f40d628db       4 minutes ago       Running             storage-provisioner       2                   22012253a39e5       storage-provisioner
	6def8b5e81c3c       8c811b4aec35f       4 minutes ago       Running             busybox                   1                   8810167e1850b       busybox-58667487b6-t6bgg
	d9bf8cef6e955       c69fa2e9cbf5f       4 minutes ago       Running             coredns                   1                   ae09d1f35f5bb       coredns-668d6bf9bc-wbn4p
	c3c2f4d5fe419       c69fa2e9cbf5f       4 minutes ago       Running             coredns                   1                   8b812c2dfd4e4       coredns-668d6bf9bc-qnl6q
	607041fc2f4ed       df3849d954c98       4 minutes ago       Running             kindnet-cni               1                   4c291c3e02236       kindnet-hm99t
	acc7b3f819a6b       6e38f40d628db       4 minutes ago       Exited              storage-provisioner       1                   22012253a39e5       storage-provisioner
	1c01d86a74294       f1332858868e1       4 minutes ago       Running             kube-proxy                1                   756822c1e13ce       kube-proxy-cg945
	e8658abcccb8b       b6a454c5a800d       4 minutes ago       Running             kube-controller-manager   1                   b171c03689d46       kube-controller-manager-ha-290859
	29445064369e5       d8e673e7c9983       4 minutes ago       Running             kube-scheduler            1                   6e1304537402c       kube-scheduler-ha-290859
	6bb8bbfa1b317       a9e7e6b294baf       4 minutes ago       Running             etcd                      1                   d32dfc76a4340       etcd-ha-290859
	00b109770be1c       85b7a174738ba       4 minutes ago       Running             kube-apiserver            1                   eb5666eae29e1       kube-apiserver-ha-290859
	6dc42b262abf6       6ff023a402a69       4 minutes ago       Running             kube-vip                  0                   c4bd0bf012eaf       kube-vip-ha-290859
	24e6d7cfe7ea4       8c811b4aec35f       26 minutes ago      Exited              busybox                   0                   78438e8022143       busybox-58667487b6-t6bgg
	731a9f2fe8645       c69fa2e9cbf5f       26 minutes ago      Exited              coredns                   0                   e56d2e4c87eea       coredns-668d6bf9bc-qnl6q
	0ec0a3a234c7c       c69fa2e9cbf5f       26 minutes ago      Exited              coredns                   0                   2818c413e6e32       coredns-668d6bf9bc-wbn4p
	2df8ccb8d6ed9       df3849d954c98       26 minutes ago      Exited              kindnet-cni               0                   08244cfc780bd       kindnet-hm99t
	e22a81661302f       f1332858868e1       26 minutes ago      Exited              kube-proxy                0                   f20a0bcfbd507       kube-proxy-cg945
	8263b35014337       b6a454c5a800d       27 minutes ago      Exited              kube-controller-manager   0                   96ffccfabb2f0       kube-controller-manager-ha-290859
	3607093f95b04       85b7a174738ba       27 minutes ago      Exited              kube-apiserver            0                   7d06c53c8318a       kube-apiserver-ha-290859
	b9d0c94204534       a9e7e6b294baf       27 minutes ago      Exited              etcd                      0                   07c98c2ded11c       etcd-ha-290859
	341626ffff967       d8e673e7c9983       27 minutes ago      Exited              kube-scheduler            0                   d86edf81d4f34       kube-scheduler-ha-290859
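
The table is a CRI-level listing from inside the VM (the view `crictl ps -a` gives): every workload from the original 27-minute-old boot is Exited, each has a Running replacement started about four minutes ago after the reboot, and kube-vip appears only on the restarted boot (attempt 0). A sketch of producing such a listing against the containerd socket follows; the socket path is the standard one from the node annotations above, and the ATTEMPT/POD columns are omitted for brevity.

    // Sketch: list CRI containers over the containerd socket, roughly the
    // data behind the table above. Illustrative, not minikube's tooling.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	client := runtimeapi.NewRuntimeServiceClient(conn)
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
    	if err != nil {
    		panic(err)
    	}
    	for _, c := range resp.Containers {
    		// Truncate ids to 13 chars, matching the table's format.
    		fmt.Printf("%-13.13s %-13.13s %-20s %s\n",
    			c.Id, c.ImageRef, c.State.String(), c.Metadata.Name)
    	}
    }
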
	
	
	==> containerd <==
	Apr 14 14:52:05 ha-290859 containerd[832]: time="2025-04-14T14:52:05.640171349Z" level=info msg="StartContainer for \"6def8b5e81c3c293839e823e7db25b60e0f88e530e87f93ad6439e1ef8967337\" returns successfully"
	Apr 14 14:52:06 ha-290859 containerd[832]: time="2025-04-14T14:52:06.457242635Z" level=info msg="RemoveContainer for \"922f97d06563e10c12ce83edd45e4f1aa0b78449dcdb50b413a7f4fc80cc346b\""
	Apr 14 14:52:06 ha-290859 containerd[832]: time="2025-04-14T14:52:06.469888693Z" level=info msg="RemoveContainer for \"922f97d06563e10c12ce83edd45e4f1aa0b78449dcdb50b413a7f4fc80cc346b\" returns successfully"
	Apr 14 14:52:17 ha-290859 containerd[832]: time="2025-04-14T14:52:17.268681775Z" level=info msg="CreateContainer within sandbox \"22012253a39e523fbee6ecb847d27dbb8e09ad98b80aa344f91a171c063bedc5\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:2,}"
	Apr 14 14:52:17 ha-290859 containerd[832]: time="2025-04-14T14:52:17.288966764Z" level=info msg="CreateContainer within sandbox \"22012253a39e523fbee6ecb847d27dbb8e09ad98b80aa344f91a171c063bedc5\" for &ContainerMetadata{Name:storage-provisioner,Attempt:2,} returns container id \"ea9e85492cab11d04c4610b349d14e65f48b4f7ef9b1bf510cce3f98d9f23a26\""
	Apr 14 14:52:17 ha-290859 containerd[832]: time="2025-04-14T14:52:17.289554135Z" level=info msg="StartContainer for \"ea9e85492cab11d04c4610b349d14e65f48b4f7ef9b1bf510cce3f98d9f23a26\""
	Apr 14 14:52:17 ha-290859 containerd[832]: time="2025-04-14T14:52:17.339537509Z" level=info msg="StartContainer for \"ea9e85492cab11d04c4610b349d14e65f48b4f7ef9b1bf510cce3f98d9f23a26\" returns successfully"
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.225918045Z" level=info msg="RemoveContainer for \"9914f8879fc4321c682c89c4d9b8a4cf65aa1773b5281eca94e0f93095a24f4d\""
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.231418188Z" level=info msg="RemoveContainer for \"9914f8879fc4321c682c89c4d9b8a4cf65aa1773b5281eca94e0f93095a24f4d\" returns successfully"
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.233079029Z" level=info msg="StopPodSandbox for \"7b4e857fc4a7278a2912c7bad6709c158c79bd073828baa274e7c8874610feb5\""
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.233179127Z" level=info msg="TearDown network for sandbox \"7b4e857fc4a7278a2912c7bad6709c158c79bd073828baa274e7c8874610feb5\" successfully"
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.233192370Z" level=info msg="StopPodSandbox for \"7b4e857fc4a7278a2912c7bad6709c158c79bd073828baa274e7c8874610feb5\" returns successfully"
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.233840780Z" level=info msg="RemovePodSandbox for \"7b4e857fc4a7278a2912c7bad6709c158c79bd073828baa274e7c8874610feb5\""
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.233892071Z" level=info msg="Forcibly stopping sandbox \"7b4e857fc4a7278a2912c7bad6709c158c79bd073828baa274e7c8874610feb5\""
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.233958310Z" level=info msg="TearDown network for sandbox \"7b4e857fc4a7278a2912c7bad6709c158c79bd073828baa274e7c8874610feb5\" successfully"
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.239481391Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7b4e857fc4a7278a2912c7bad6709c158c79bd073828baa274e7c8874610feb5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.239617741Z" level=info msg="RemovePodSandbox \"7b4e857fc4a7278a2912c7bad6709c158c79bd073828baa274e7c8874610feb5\" returns successfully"
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.240179712Z" level=info msg="StopPodSandbox for \"4de376d34ee7f88a6fa395d518e7950ac2b1691d3e1668d0d79130d65133045f\""
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.240271309Z" level=info msg="TearDown network for sandbox \"4de376d34ee7f88a6fa395d518e7950ac2b1691d3e1668d0d79130d65133045f\" successfully"
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.240298864Z" level=info msg="StopPodSandbox for \"4de376d34ee7f88a6fa395d518e7950ac2b1691d3e1668d0d79130d65133045f\" returns successfully"
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.240783074Z" level=info msg="RemovePodSandbox for \"4de376d34ee7f88a6fa395d518e7950ac2b1691d3e1668d0d79130d65133045f\""
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.240816354Z" level=info msg="Forcibly stopping sandbox \"4de376d34ee7f88a6fa395d518e7950ac2b1691d3e1668d0d79130d65133045f\""
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.240870755Z" level=info msg="TearDown network for sandbox \"4de376d34ee7f88a6fa395d518e7950ac2b1691d3e1668d0d79130d65133045f\" successfully"
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.245855866Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4de376d34ee7f88a6fa395d518e7950ac2b1691d3e1668d0d79130d65133045f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Apr 14 14:52:48 ha-290859 containerd[832]: time="2025-04-14T14:52:48.245939634Z" level=info msg="RemovePodSandbox \"4de376d34ee7f88a6fa395d518e7950ac2b1691d3e1668d0d79130d65133045f\" returns successfully"
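
These entries are the periodic pod-sandbox garbage collection clearing sandboxes left over from the previous boot: each gets a network TearDown (a no-op for an already-gone netns), a forcible stop, and a remove. The "Failed to get podSandbox status ... not found" warnings are benign, since the sandbox being reported on is exactly the one just deleted. A rough sketch of the same stop-then-remove sequence over the CRI API, assuming the standard containerd socket (the kubelet's real GC adds age and count policies this sketch ignores):

    // Sketch: remove NOTREADY sandboxes from a previous boot via CRI.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()
    	rt := runtimeapi.NewRuntimeServiceClient(conn)
    	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    	defer cancel()

    	resp, err := rt.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
    	if err != nil {
    		panic(err)
    	}
    	for _, sb := range resp.Items {
    		if sb.State != runtimeapi.PodSandboxState_SANDBOX_NOTREADY {
    			continue
    		}
    		// StopPodSandbox tears down the network namespace ("TearDown
    		// network" above); RemovePodSandbox then deletes the sandbox.
    		if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: sb.Id}); err != nil {
    			fmt.Println("stop:", err)
    			continue
    		}
    		if _, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: sb.Id}); err != nil {
    			fmt.Println("remove:", err)
    		}
    	}
    }
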
	
	
	==> coredns [0ec0a3a234c7c9ab89ca83a237362a229e9c5f0e94fdbf641b886cf994e1cd2f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:46089 - 56153 "HINFO IN 6072608555509463616.6529762715821029691. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009374887s
	[INFO] 10.244.0.4:35907 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000221161s
	[INFO] 10.244.0.4:36782 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005796917s
	[INFO] 10.244.0.4:41522 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000189547s
	[INFO] 10.244.0.4:42146 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118814s
	[INFO] 10.244.0.4:60607 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123758s
	[INFO] 10.244.0.4:43711 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000363945s
	[INFO] 10.244.0.4:55165 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147511s
	[INFO] 10.244.0.4:37988 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063814s
	[INFO] 10.244.0.4:34715 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110518s
	
	
	==> coredns [731a9f2fe8645b7ec17e0629dba8c56c61702b584cfa519d26449dd6d32827a0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:50026 - 40228 "HINFO IN 6089878548460793106.7503956428927620962. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010088983s
	[INFO] 10.244.0.4:56129 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00054069s
	[INFO] 10.244.0.4:53926 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.015577927s
	[INFO] 10.244.0.4:39454 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 1.017801671s
	[INFO] 10.244.0.4:52928 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.006480432s
	[INFO] 10.244.0.4:37155 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000144828s
	[INFO] 10.244.0.4:60063 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003567762s
	[INFO] 10.244.0.4:60207 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153406s
	[INFO] 10.244.0.4:60174 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117303s
	[INFO] 10.244.0.4:60031 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124845s
	[INFO] 10.244.0.4:43114 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000177401s
	[INFO] 10.244.0.4:59108 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000291115s
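
The NXDOMAIN/NOERROR pairs in these query logs are resolv.conf search-list expansion at work: with the pod default of ndots:5, a short name like kubernetes.default is first tried as kubernetes.default.default.svc.cluster.local (NXDOMAIN) before a later search candidate resolves. A tiny probe that makes the expansion visible when run inside a pod (results depend on the pod's resolv.conf, so this only demonstrates the mechanism):

    // Sketch: resolve a short name and its FQDN from inside a pod; the short
    // form goes through the kubelet-written search list before succeeding.
    package main

    import (
    	"context"
    	"fmt"
    	"net"
    )

    func main() {
    	for _, host := range []string{
    		"kubernetes.default",
    		"kubernetes.default.svc.cluster.local",
    	} {
    		addrs, err := net.DefaultResolver.LookupHost(context.Background(), host)
    		fmt.Printf("%-40s -> %v err=%v\n", host, addrs, err)
    	}
    }
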
	
	
	==> coredns [c3c2f4d5fe419392ff3850394da92847c7bcfe369f4d0eddffd38c2a59b41025] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:48956 - 43158 "HINFO IN 5542730592661564248.5649616312753148618. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009354162s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1967277509]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Apr-2025 14:52:05.690) (total time: 30002ms):
	Trace[1967277509]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (14:52:35.692)
	Trace[1967277509]: [30.002592464s] [30.002592464s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1343823812]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Apr-2025 14:52:05.691) (total time: 30002ms):
	Trace[1343823812]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (14:52:35.693)
	Trace[1343823812]: [30.00250289s] [30.00250289s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2019019817]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Apr-2025 14:52:05.690) (total time: 30004ms):
	Trace[2019019817]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (14:52:35.694)
	Trace[2019019817]: [30.004408468s] [30.004408468s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
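
The 30-second i/o timeouts show the restarted coredns pods most likely racing the service-proxy rules being reprogrammed after the reboot: until the Service VIP 10.96.0.1 is reachable again on this node, every list against the apiserver dies at the TCP level, and client-go's reflector simply retries until it succeeds. A reachability probe reproducing the failing call from inside a pod follows; the 30s timeout mirrors the traces, and certificate verification is skipped because only TCP reachability is being tested here.

    // Sketch: probe the in-cluster apiserver VIP the way the reflector does.
    // A 401/403 response still proves the VIP is reachable; the failure mode
    // in the log was a TCP-level i/o timeout, not an auth error.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 30 * time.Second,
    		Transport: &http.Transport{
    			// Diagnostic probe only; do not disable verification otherwise.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://10.96.0.1:443/api/v1/services?limit=500")
    	if err != nil {
    		fmt.Println("dial failed (matches the i/o timeouts above):", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("status:", resp.Status)
    }
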
	
	
	==> coredns [d9bf8cef6e9551ba044bfa75d53bebdabf94a544fb35bcba8ae9dda955c97297] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:52958 - 12430 "HINFO IN 2501253073000439982.8063739159986489070. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.007070061s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1427080852]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Apr-2025 14:52:05.691) (total time: 30002ms):
	Trace[1427080852]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (14:52:35.691)
	Trace[1427080852]: [30.002092041s] [30.002092041s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1959333545]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Apr-2025 14:52:05.691) (total time: 30002ms):
	Trace[1959333545]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (14:52:35.692)
	Trace[1959333545]: [30.002031471s] [30.002031471s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[910229496]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Apr-2025 14:52:05.690) (total time: 30001ms):
	Trace[910229496]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (14:52:35.691)
	Trace[910229496]: [30.001488485s] [30.001488485s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               ha-290859
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-290859
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ed8f1f01b35eff2786f40199152a1775806f2de2
	                    minikube.k8s.io/name=ha-290859
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_14T14_29_26_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Apr 2025 14:29:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-290859
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Apr 2025 14:56:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Apr 2025 14:52:02 +0000   Mon, 14 Apr 2025 14:29:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Apr 2025 14:52:02 +0000   Mon, 14 Apr 2025 14:29:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Apr 2025 14:52:02 +0000   Mon, 14 Apr 2025 14:29:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Apr 2025 14:52:02 +0000   Mon, 14 Apr 2025 14:29:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.110
	  Hostname:    ha-290859
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0538f5775f954b3bbf6bc94e8eb6c49a
	  System UUID:                0538f577-5f95-4b3b-bf6b-c94e8eb6c49a
	  Boot ID:                    506c18f2-7f12-4001-8285-917ecaddf63d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-t6bgg             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 coredns-668d6bf9bc-qnl6q             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 coredns-668d6bf9bc-wbn4p             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 etcd-ha-290859                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27m
	  kube-system                 kindnet-hm99t                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26m
	  kube-system                 kube-apiserver-ha-290859             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-ha-290859    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-cg945                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-ha-290859             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-vip-ha-290859                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m20s                  kube-proxy       
	  Normal   Starting                 26m                    kube-proxy       
	  Normal   Starting                 27m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  27m                    kubelet          Node ha-290859 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    27m                    kubelet          Node ha-290859 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     27m                    kubelet          Node ha-290859 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           26m                    node-controller  Node ha-290859 event: Registered Node ha-290859 in Controller
	  Normal   NodeReady                26m                    kubelet          Node ha-290859 status is now: NodeReady
	  Normal   Starting                 4m38s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  4m38s (x8 over 4m38s)  kubelet          Node ha-290859 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m38s (x8 over 4m38s)  kubelet          Node ha-290859 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m38s (x7 over 4m38s)  kubelet          Node ha-290859 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  4m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           4m25s                  node-controller  Node ha-290859 event: Registered Node ha-290859 in Controller
	  Warning  Rebooted                 4m24s                  kubelet          Node ha-290859 has been rebooted, boot id: 506c18f2-7f12-4001-8285-917ecaddf63d
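
The events line up with the timeline elsewhere in this report: the node rebooted (boot id 506c18f2-7f12-4001-8285-917ecaddf63d), kubelet restarted 4m38s before this dump and was re-registered by the node controller 4m25s later, and ha-290859-m02 never appears in the node list at all, which is exactly what the 404 loop was waiting on. A short check that lists what actually registered, using the same kubeconfig-path assumption as the earlier sketch:

    // Sketch: list registered nodes with their Ready status and pod CIDRs;
    // against this cluster it would print only ha-290859.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		ready := corev1.ConditionUnknown
    		for _, c := range n.Status.Conditions {
    			if c.Type == corev1.NodeReady {
    				ready = c.Status
    			}
    		}
    		fmt.Printf("%s Ready=%s PodCIDRs=%v\n", n.Name, ready, n.Spec.PodCIDRs)
    	}
    }
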
	
	
	==> dmesg <==
	[Apr14 14:51] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051074] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036733] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.829588] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.946390] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.551280] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.183144] systemd-fstab-generator[755]: Ignoring "noauto" option for root device
	[  +0.054346] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061747] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +0.177698] systemd-fstab-generator[782]: Ignoring "noauto" option for root device
	[  +0.145567] systemd-fstab-generator[794]: Ignoring "noauto" option for root device
	[  +0.269397] systemd-fstab-generator[824]: Ignoring "noauto" option for root device
	[  +1.160092] systemd-fstab-generator[899]: Ignoring "noauto" option for root device
	[  +6.952352] kauditd_printk_skb: 197 callbacks suppressed
	[Apr14 14:52] kauditd_printk_skb: 40 callbacks suppressed
	[ +12.604617] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [6bb8bbfa1b317897b9bcc96ba49e7c68f83cc4409dd69a72b86f0448aa2519ea] <==
	{"level":"info","ts":"2025-04-14T14:51:55.652582Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a3dbfa6decfc8853","local-member-id":"fbb007bab925a598","added-peer-id":"fbb007bab925a598","added-peer-peer-urls":["https://192.168.39.110:2380"]}
	{"level":"info","ts":"2025-04-14T14:51:55.652820Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a3dbfa6decfc8853","local-member-id":"fbb007bab925a598","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:51:55.652875Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:51:55.657644Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T14:51:55.677815Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-04-14T14:51:55.678882Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"fbb007bab925a598","initial-advertise-peer-urls":["https://192.168.39.110:2380"],"listen-peer-urls":["https://192.168.39.110:2380"],"advertise-client-urls":["https://192.168.39.110:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.110:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-14T14:51:55.678927Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-14T14:51:55.679144Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.110:2380"}
	{"level":"info","ts":"2025-04-14T14:51:55.679165Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.110:2380"}
	{"level":"info","ts":"2025-04-14T14:51:56.795570Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 is starting a new election at term 2"}
	{"level":"info","ts":"2025-04-14T14:51:56.795637Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-04-14T14:51:56.795654Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 received MsgPreVoteResp from fbb007bab925a598 at term 2"}
	{"level":"info","ts":"2025-04-14T14:51:56.795666Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 became candidate at term 3"}
	{"level":"info","ts":"2025-04-14T14:51:56.795959Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 received MsgVoteResp from fbb007bab925a598 at term 3"}
	{"level":"info","ts":"2025-04-14T14:51:56.796217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 became leader at term 3"}
	{"level":"info","ts":"2025-04-14T14:51:56.796240Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fbb007bab925a598 elected leader fbb007bab925a598 at term 3"}
	{"level":"info","ts":"2025-04-14T14:51:56.797919Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"fbb007bab925a598","local-member-attributes":"{Name:ha-290859 ClientURLs:[https://192.168.39.110:2379]}","request-path":"/0/members/fbb007bab925a598/attributes","cluster-id":"a3dbfa6decfc8853","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-14T14:51:56.798371Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T14:51:56.798558Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T14:51:56.799556Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-14T14:51:56.799592Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-14T14:51:56.800393Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T14:51:56.801226Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.110:2379"}
	{"level":"info","ts":"2025-04-14T14:51:56.800393Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T14:51:56.802399Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [b9d0c942045346e617420beacf1ee53ebaa73b72295bfad233845fe524f8b15c] <==
	{"level":"info","ts":"2025-04-14T14:29:20.942134Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-14T14:29:20.942264Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.110:2379"}
	{"level":"info","ts":"2025-04-14T14:29:20.943625Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-14T14:29:20.943655Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2025-04-14T14:29:27.104552Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.197172ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/node-controller\" limit:1 ","response":"range_response_count:1 size:195"}
	{"level":"info","ts":"2025-04-14T14:29:27.104712Z","caller":"traceutil/trace.go:171","msg":"trace[2014118741] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/node-controller; range_end:; response_count:1; response_revision:283; }","duration":"161.489617ms","start":"2025-04-14T14:29:26.943197Z","end":"2025-04-14T14:29:27.104687Z","steps":["trace[2014118741] 'range keys from in-memory index tree'  (duration: 161.141805ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:29:27.105569Z","caller":"traceutil/trace.go:171","msg":"trace[1003808847] transaction","detail":"{read_only:false; response_revision:284; number_of_response:1; }","duration":"157.128151ms","start":"2025-04-14T14:29:26.948431Z","end":"2025-04-14T14:29:27.105559Z","steps":["trace[1003808847] 'process raft request'  (duration: 84.378612ms)","trace[1003808847] 'compare'  (duration: 71.52798ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-14T14:29:27.104865Z","caller":"traceutil/trace.go:171","msg":"trace[43329066] linearizableReadLoop","detail":"{readStateIndex:297; appliedIndex:296; }","duration":"119.436827ms","start":"2025-04-14T14:29:26.985404Z","end":"2025-04-14T14:29:27.104841Z","steps":["trace[43329066] 'read index received'  (duration: 47.335931ms)","trace[43329066] 'applied index is now lower than readState.Index'  (duration: 72.100547ms)"],"step_count":2}
	{"level":"warn","ts":"2025-04-14T14:29:27.105882Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.482108ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-290859\" limit:1 ","response":"range_response_count:1 size:4024"}
	{"level":"info","ts":"2025-04-14T14:29:27.105907Z","caller":"traceutil/trace.go:171","msg":"trace[1848025885] range","detail":"{range_begin:/registry/minions/ha-290859; range_end:; response_count:1; response_revision:284; }","duration":"120.538719ms","start":"2025-04-14T14:29:26.985360Z","end":"2025-04-14T14:29:27.105899Z","steps":["trace[1848025885] 'agreement among raft nodes before linearized reading'  (duration: 120.384333ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:30:04.979205Z","caller":"traceutil/trace.go:171","msg":"trace[85484590] transaction","detail":"{read_only:false; response_revision:496; number_of_response:1; }","duration":"156.247744ms","start":"2025-04-14T14:30:04.822935Z","end":"2025-04-14T14:30:04.979183Z","steps":["trace[85484590] 'process raft request'  (duration: 156.102613ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:39:20.967676Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":955}
	{"level":"info","ts":"2025-04-14T14:39:20.980951Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":955,"took":"12.971168ms","hash":3281203929,"current-db-size-bytes":2400256,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2400256,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2025-04-14T14:39:20.980998Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3281203929,"revision":955,"compact-revision":-1}
	{"level":"info","ts":"2025-04-14T14:42:12.425594Z","caller":"traceutil/trace.go:171","msg":"trace[593749251] linearizableReadLoop","detail":"{readStateIndex:1974; appliedIndex:1973; }","duration":"103.549581ms","start":"2025-04-14T14:42:12.322004Z","end":"2025-04-14T14:42:12.425554Z","steps":["trace[593749251] 'read index received'  (duration: 102.720139ms)","trace[593749251] 'applied index is now lower than readState.Index'  (duration: 828.805µs)"],"step_count":2}
	{"level":"warn","ts":"2025-04-14T14:42:12.426144Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.759593ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2025-04-14T14:42:12.426196Z","caller":"traceutil/trace.go:171","msg":"trace[257637869] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; response_count:0; response_revision:1805; }","duration":"104.23976ms","start":"2025-04-14T14:42:12.321948Z","end":"2025-04-14T14:42:12.426188Z","steps":["trace[257637869] 'agreement among raft nodes before linearized reading'  (duration: 103.769974ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:42:12.425685Z","caller":"traceutil/trace.go:171","msg":"trace[874985590] transaction","detail":"{read_only:false; response_revision:1805; number_of_response:1; }","duration":"128.996586ms","start":"2025-04-14T14:42:12.296675Z","end":"2025-04-14T14:42:12.425672Z","steps":["trace[874985590] 'process raft request'  (duration: 128.079961ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T14:42:29.811595Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.362023ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11932452365827166964 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:25989634b465d2f3>","response":"size:42"}
	{"level":"info","ts":"2025-04-14T14:44:20.976766Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1495}
	{"level":"info","ts":"2025-04-14T14:44:20.980966Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1495,"took":"3.550898ms","hash":2769383186,"current-db-size-bytes":2400256,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2031616,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2025-04-14T14:44:20.981013Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2769383186,"revision":1495,"compact-revision":955}
	{"level":"info","ts":"2025-04-14T14:49:20.985771Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2116}
	{"level":"info","ts":"2025-04-14T14:49:20.990796Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":2116,"took":"4.442405ms","hash":2965091083,"current-db-size-bytes":2400256,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2244608,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2025-04-14T14:49:20.990930Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2965091083,"revision":2116,"compact-revision":1495}
	
	
	==> kernel <==
	 14:56:26 up 4 min,  0 users,  load average: 0.83, 0.38, 0.15
	Linux ha-290859 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2df8ccb8d6ed928a95e69ecd1be2105fc737c699aa26805820a0af0eca5bb50d] <==
	I0414 14:48:44.500441       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:48:54.500620       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:48:54.501802       1 main.go:301] handling current node
	I0414 14:48:54.501933       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:48:54.501959       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:49:04.501654       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:49:04.501878       1 main.go:301] handling current node
	I0414 14:49:04.502475       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:49:04.502663       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:49:14.500855       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:49:14.500928       1 main.go:301] handling current node
	I0414 14:49:14.500947       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:49:14.500953       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:49:24.509280       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:49:24.509428       1 main.go:301] handling current node
	I0414 14:49:24.509592       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:49:24.509696       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:49:34.500704       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:49:34.500778       1 main.go:301] handling current node
	I0414 14:49:34.500819       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:49:34.500825       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:49:44.504658       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:49:44.504751       1 main.go:301] handling current node
	I0414 14:49:44.504856       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:49:44.504972       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [607041fc2f4edc17de3caec2d00a9f9b49a94ed154254da72ec094a0f148db36] <==
	I0414 14:55:16.456277       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:55:26.465697       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:55:26.465845       1 main.go:301] handling current node
	I0414 14:55:26.465927       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:55:26.465968       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:55:36.463752       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:55:36.463830       1 main.go:301] handling current node
	I0414 14:55:36.463853       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:55:36.463859       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:55:46.456585       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:55:46.457113       1 main.go:301] handling current node
	I0414 14:55:46.457561       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:55:46.459726       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:55:56.464186       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:55:56.464300       1 main.go:301] handling current node
	I0414 14:55:56.464332       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:55:56.464345       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:56:06.455081       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:56:06.455167       1 main.go:301] handling current node
	I0414 14:56:06.455204       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:56:06.455229       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:56:16.454747       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:56:16.454884       1 main.go:301] handling current node
	I0414 14:56:16.454938       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:56:16.455070       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
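Both kindnet instances log the same reconcile loop every ten seconds: enumerate the known nodes, treat 192.168.39.110 as the local node, and note that ha-290859-m03 owns pod CIDR 10.244.1.0/24. For a remote node that step amounts to keeping a host route to the peer's pod CIDR via its node IP; roughly (an illustrative sketch using the CIDR/IP pair from the log above, not a command the test ran):

    # what kindnet effectively programs for peer ha-290859-m03
    ip route replace 10.244.1.0/24 via 192.168.39.112
    # and what can be checked inside the VM
    ip route show | grep 10.244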
	
	
	==> kube-apiserver [00b109770be1cb3d772b7d440ccc36c098a8627e8280f195c263a0a87a6e0c07] <==
	I0414 14:51:57.932933       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0414 14:51:58.014528       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0414 14:51:58.014629       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0414 14:51:58.014535       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0414 14:51:58.023891       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0414 14:51:58.024459       1 shared_informer.go:320] Caches are synced for configmaps
	I0414 14:51:58.024473       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0414 14:51:58.024547       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0414 14:51:58.025376       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0414 14:51:58.035556       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0414 14:51:58.035771       1 aggregator.go:171] initial CRD sync complete...
	I0414 14:51:58.035828       1 autoregister_controller.go:144] Starting autoregister controller
	I0414 14:51:58.035845       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0414 14:51:58.035857       1 cache.go:39] Caches are synced for autoregister controller
	I0414 14:51:58.036008       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0414 14:51:58.036120       1 policy_source.go:240] refreshing policies
	I0414 14:51:58.097914       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0414 14:51:58.101123       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0414 14:51:58.918987       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0414 14:51:59.963976       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0414 14:52:04.263824       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0414 14:52:04.306348       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0414 14:52:04.363470       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0414 14:52:04.453440       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0414 14:52:04.454453       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [3607093f95b0430c4841d7be9ed19d0163ff2e9ee2889a44f89bd1ca07bf42d3] <==
	I0414 14:29:22.362271       1 autoregister_controller.go:144] Starting autoregister controller
	I0414 14:29:22.362276       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0414 14:29:22.362280       1 cache.go:39] Caches are synced for autoregister controller
	I0414 14:29:22.378719       1 controller.go:615] quota admission added evaluator for: namespaces
	I0414 14:29:22.457815       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0414 14:29:23.164003       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0414 14:29:23.168635       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0414 14:29:23.168816       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0414 14:29:23.763560       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0414 14:29:23.812117       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0414 14:29:23.884276       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0414 14:29:23.896601       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.110]
	I0414 14:29:23.897534       1 controller.go:615] quota admission added evaluator for: endpoints
	I0414 14:29:23.902387       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0414 14:29:24.193931       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0414 14:29:25.780107       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0414 14:29:25.808820       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0414 14:29:25.816856       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0414 14:29:29.653221       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0414 14:29:29.756960       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0414 14:41:55.019097       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52466: use of closed network connection
	E0414 14:41:55.440782       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52532: use of closed network connection
	E0414 14:41:55.859929       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52600: use of closed network connection
	E0414 14:41:58.277207       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52686: use of closed network connection
	E0414 14:41:58.438151       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52698: use of closed network connection
	
	
	==> kube-controller-manager [8263b35014337f6119ba3a0d6487090fd5b1b3b8a002a99623620e847d186847] <==
	I0414 14:42:29.963750       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:29.969981       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="39.002µs"
	I0414 14:42:30.275380       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:30.614411       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:33.964410       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-290859-m03"
	I0414 14:42:34.046665       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:39.961881       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:49.191468       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-290859-m03"
	I0414 14:42:49.192361       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:49.201252       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:42:49.216690       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="71.679µs"
	I0414 14:42:49.217122       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="45.948µs"
	I0414 14:42:49.230018       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="69.053µs"
	I0414 14:42:52.664944       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="13.387962ms"
	I0414 14:42:52.665652       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="82.546µs"
	I0414 14:42:53.979890       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:43:00.010906       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:46:33.503243       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:47:25.635375       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
	I0414 14:49:09.052122       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:49:09.070345       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:49:09.083390       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="59.905µs"
	I0414 14:49:09.105070       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="10.887319ms"
	I0414 14:49:09.105381       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="40.135µs"
	I0414 14:49:14.179848       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	
	
	==> kube-controller-manager [e8658abcccb8b10d531ad775050d96f3375e484efcbaba4d5509a7a22f3608a9] <==
	I0414 14:52:01.154050       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:52:01.197460       1 shared_informer.go:320] Caches are synced for garbage collector
	I0414 14:52:01.197682       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0414 14:52:01.197815       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0414 14:52:01.207566       1 shared_informer.go:320] Caches are synced for garbage collector
	I0414 14:52:02.153254       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
	I0414 14:52:04.272410       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="26.559874ms"
	I0414 14:52:04.273686       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="51.226µs"
	I0414 14:52:04.439056       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="13.737014ms"
	I0414 14:52:04.439344       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="242.032µs"
	I0414 14:52:04.459376       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="12.444236ms"
	I0414 14:52:04.460062       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="174.256µs"
	I0414 14:52:06.474796       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="54.379µs"
	I0414 14:52:06.508895       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="52.708µs"
	I0414 14:52:06.532239       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="7.280916ms"
	I0414 14:52:06.532571       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="115.282µs"
	I0414 14:52:38.517073       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="20.719998ms"
	I0414 14:52:38.517449       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="101.016µs"
	I0414 14:52:38.546449       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="13.225146ms"
	I0414 14:52:38.546575       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="46.763µs"
	I0414 14:56:15.487465       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:56:15.503080       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:56:15.536625       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="25.061691ms"
	I0414 14:56:15.546233       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="9.560251ms"
	I0414 14:56:15.546295       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="27.858µs"
	
	
	==> kube-proxy [1c01d86a74294bbfd5f487ec85ffc0f35cc4b979ad90c940eea5a17a8e5f46fb] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0414 14:52:05.724966       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0414 14:52:05.743076       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.110"]
	E0414 14:52:05.743397       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0414 14:52:05.784686       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0414 14:52:05.784731       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0414 14:52:05.784755       1 server_linux.go:170] "Using iptables Proxier"
	I0414 14:52:05.786929       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0414 14:52:05.787617       1 server.go:497] "Version info" version="v1.32.2"
	I0414 14:52:05.787645       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 14:52:05.789983       1 config.go:199] "Starting service config controller"
	I0414 14:52:05.790536       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0414 14:52:05.791108       1 config.go:329] "Starting node config controller"
	I0414 14:52:05.791131       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0414 14:52:05.794555       1 config.go:105] "Starting endpoint slice config controller"
	I0414 14:52:05.796335       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0414 14:52:05.891275       1 shared_informer.go:320] Caches are synced for service config
	I0414 14:52:05.891550       1 shared_informer.go:320] Caches are synced for node config
	I0414 14:52:05.901825       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [e22a81661302ff340c9846a7a06a13d955ab98cfe8e7088e0c805fb4f3eee8a2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0414 14:29:30.555771       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0414 14:29:30.580550       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.110"]
	E0414 14:29:30.580640       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0414 14:29:30.617235       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0414 14:29:30.617293       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0414 14:29:30.617328       1 server_linux.go:170] "Using iptables Proxier"
	I0414 14:29:30.620046       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0414 14:29:30.620989       1 server.go:497] "Version info" version="v1.32.2"
	I0414 14:29:30.621018       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 14:29:30.625365       1 config.go:329] "Starting node config controller"
	I0414 14:29:30.625863       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0414 14:29:30.628597       1 config.go:199] "Starting service config controller"
	I0414 14:29:30.628644       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0414 14:29:30.628665       1 config.go:105] "Starting endpoint slice config controller"
	I0414 14:29:30.628683       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0414 14:29:30.726314       1 shared_informer.go:320] Caches are synced for node config
	I0414 14:29:30.729639       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0414 14:29:30.729680       1 shared_informer.go:320] Caches are synced for service config
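Both kube-proxy instances hit the same pair of cosmetic startup errors: the nftables cleanup fails with "Operation not supported" and IPv6 iptables support is missing, so kube-proxy falls back to a single-stack IPv4 iptables proxier, which is expected on this Buildroot guest kernel. A quick way to confirm the missing kernel support (sketch; assumes the profile name used throughout this report):

    out/minikube-linux-amd64 -p ha-290859 ssh -- "lsmod | grep -E 'nf_tables|ip6table_nat' || echo modules not loaded"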
	
	
	==> kube-scheduler [29445064369e58250458efcfeed9a28e6da75ce4bcb6f15c9e58844eb1ba811e] <==
	I0414 14:51:55.842470       1 serving.go:386] Generated self-signed cert in-memory
	W0414 14:51:57.981716       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0414 14:51:57.981805       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0414 14:51:57.981829       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0414 14:51:57.981840       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0414 14:51:58.035351       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0414 14:51:58.035404       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 14:51:58.038565       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0414 14:51:58.038986       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0414 14:51:58.039147       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0414 14:51:58.039434       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0414 14:51:58.140699       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [341626ffff967b14e3bfaa050905eba2b82a07223c0356ee50b5deeef6d9898b] <==
	E0414 14:29:22.288686       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:22.287191       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:22.288704       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:22.286394       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0414 14:29:22.288719       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0414 14:29:22.285771       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.108289       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0414 14:29:23.108351       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.153824       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:23.153954       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.203744       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0414 14:29:23.203977       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0414 14:29:23.367236       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0414 14:29:23.367550       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.396026       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0414 14:29:23.396243       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.401643       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:23.401820       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.425454       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0414 14:29:23.425684       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.433181       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0414 14:29:23.433222       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:29:23.457688       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0414 14:29:23.457949       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0414 14:29:25.662221       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
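The burst of "forbidden" list/watch errors from this older kube-scheduler instance is the usual bootstrap race: the scheduler starts before kubeadm finishes wiring up RBAC, and its informers retry until the final "Caches are synced" line at 14:29:25. Were the errors to persist past startup, the grants could be checked directly (sketch; context name taken from this report):

    kubectl --context ha-290859 get clusterrolebinding system:kube-scheduler
    kubectl --context ha-290859 auth can-i list nodes --as system:kube-scheduler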
	
	
	==> kubelet <==
	Apr 14 14:52:06 ha-290859 kubelet[906]: I0414 14:52:06.454237     906 scope.go:117] "RemoveContainer" containerID="922f97d06563e10c12ce83edd45e4f1aa0b78449dcdb50b413a7f4fc80cc346b"
	Apr 14 14:52:06 ha-290859 kubelet[906]: I0414 14:52:06.455356     906 scope.go:117] "RemoveContainer" containerID="acc7b3f819a6b9fa74f5e5423aac252faa39c9dec24306ff130436d9a722188a"
	Apr 14 14:52:06 ha-290859 kubelet[906]: E0414 14:52:06.455566     906 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a98bb55f-5a73-4436-82eb-ae7534928039)\"" pod="kube-system/storage-provisioner" podUID="a98bb55f-5a73-4436-82eb-ae7534928039"
	Apr 14 14:52:17 ha-290859 kubelet[906]: I0414 14:52:17.265870     906 scope.go:117] "RemoveContainer" containerID="acc7b3f819a6b9fa74f5e5423aac252faa39c9dec24306ff130436d9a722188a"
	Apr 14 14:52:48 ha-290859 kubelet[906]: I0414 14:52:48.224225     906 scope.go:117] "RemoveContainer" containerID="9914f8879fc4321c682c89c4d9b8a4cf65aa1773b5281eca94e0f93095a24f4d"
	Apr 14 14:52:48 ha-290859 kubelet[906]: E0414 14:52:48.281657     906 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:52:48 ha-290859 kubelet[906]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:52:48 ha-290859 kubelet[906]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:52:48 ha-290859 kubelet[906]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:52:48 ha-290859 kubelet[906]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 14:53:48 ha-290859 kubelet[906]: E0414 14:53:48.279850     906 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:53:48 ha-290859 kubelet[906]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:53:48 ha-290859 kubelet[906]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:53:48 ha-290859 kubelet[906]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:53:48 ha-290859 kubelet[906]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 14:54:48 ha-290859 kubelet[906]: E0414 14:54:48.287249     906 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:54:48 ha-290859 kubelet[906]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:54:48 ha-290859 kubelet[906]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:54:48 ha-290859 kubelet[906]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:54:48 ha-290859 kubelet[906]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 14:55:48 ha-290859 kubelet[906]: E0414 14:55:48.279366     906 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 14:55:48 ha-290859 kubelet[906]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 14:55:48 ha-290859 kubelet[906]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 14:55:48 ha-290859 kubelet[906]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:55:48 ha-290859 kubelet[906]:  > table="nat" chain="KUBE-KUBELET-CANARY"
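The once-a-minute kubelet error above is its iptables canary: kubelet tries to create a KUBE-KUBELET-CANARY chain in both the IPv4 and IPv6 nat tables, and only the IPv6 half fails because the guest kernel lacks ip6table_nat. The probe can be reproduced by hand (sketch, same profile-name assumption as above):

    out/minikube-linux-amd64 -p ha-290859 ssh -- "sudo ip6tables -t nat -L -n"
    # expected on this guest: can't initialize ip6tables table `nat'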
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-290859 -n ha-290859
helpers_test.go:261: (dbg) Run:  kubectl --context ha-290859 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-58667487b6-bfghg busybox-58667487b6-q9jvx
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-290859 describe pod busybox-58667487b6-bfghg busybox-58667487b6-q9jvx
helpers_test.go:282: (dbg) kubectl --context ha-290859 describe pod busybox-58667487b6-bfghg busybox-58667487b6-q9jvx:

                                                
                                                
-- stdout --
	Name:             busybox-58667487b6-bfghg
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=58667487b6
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-58667487b6
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6l76h (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-6l76h:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  11s   default-scheduler  0/2 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.
	
	
	Name:             busybox-58667487b6-q9jvx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=58667487b6
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-58667487b6
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fklg7 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-fklg7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                    From               Message
	  ----     ------            ----                   ----               -------
	  Warning  FailedScheduling  4m26s (x2 over 4m29s)  default-scheduler  0/2 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  16m (x3 over 26m)      default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  13m (x2 over 13m)      default-scheduler  0/2 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  8m1s (x3 over 13m)     default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  7m7s (x2 over 7m18s)   default-scheduler  0/2 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.86s)
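The describe output above explains the failure mode: both busybox pods are Pending because the deployment schedules with pod anti-affinity (one replica per node), and once ha-290859-m03 is deleted, the surviving nodes either already run a replica or carry a not-ready/unreachable taint, so no node satisfies the rule. The rule itself could be confirmed on the live cluster with a sketch like the following (the deployment name "busybox" is inferred from the ReplicaSet name busybox-58667487b6):

    kubectl --context ha-290859 get deployment busybox \
      -o jsonpath='{.spec.template.spec.affinity.podAntiAffinity}'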

                                                
                                    
TestMultiControlPlane/serial/StopCluster (92.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-290859 stop -v=7 --alsologtostderr
E0414 14:57:59.575544 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/functional-905978/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-290859 stop -v=7 --alsologtostderr: (1m32.198317318s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-290859 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-290859 status -v=7 --alsologtostderr: exit status 7 (94.119795ms)

                                                
                                                
-- stdout --
	ha-290859
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-290859-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0414 14:57:59.747140 1223170 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:57:59.747338 1223170 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:57:59.747350 1223170 out.go:358] Setting ErrFile to fd 2...
	I0414 14:57:59.747356 1223170 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:57:59.747598 1223170 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
	I0414 14:57:59.747836 1223170 out.go:352] Setting JSON to false
	I0414 14:57:59.747878 1223170 mustload.go:65] Loading cluster: ha-290859
	I0414 14:57:59.747991 1223170 notify.go:220] Checking for updates...
	I0414 14:57:59.748323 1223170 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:57:59.748360 1223170 status.go:174] checking status of ha-290859 ...
	I0414 14:57:59.748824 1223170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:57:59.748890 1223170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:57:59.766834 1223170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44471
	I0414 14:57:59.767393 1223170 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:57:59.767909 1223170 main.go:141] libmachine: Using API Version  1
	I0414 14:57:59.767932 1223170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:57:59.768371 1223170 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:57:59.768551 1223170 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:57:59.770068 1223170 status.go:371] ha-290859 host status = "Stopped" (err=<nil>)
	I0414 14:57:59.770082 1223170 status.go:384] host is not running, skipping remaining checks
	I0414 14:57:59.770088 1223170 status.go:176] ha-290859 status: &{Name:ha-290859 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 14:57:59.770107 1223170 status.go:174] checking status of ha-290859-m02 ...
	I0414 14:57:59.770429 1223170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:57:59.770470 1223170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:57:59.785997 1223170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42157
	I0414 14:57:59.786465 1223170 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:57:59.786937 1223170 main.go:141] libmachine: Using API Version  1
	I0414 14:57:59.786955 1223170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:57:59.787302 1223170 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:57:59.787485 1223170 main.go:141] libmachine: (ha-290859-m02) Calling .GetState
	I0414 14:57:59.789206 1223170 status.go:371] ha-290859-m02 host status = "Stopped" (err=<nil>)
	I0414 14:57:59.789222 1223170 status.go:384] host is not running, skipping remaining checks
	I0414 14:57:59.789227 1223170 status.go:176] ha-290859-m02 status: &{Name:ha-290859-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-290859 status -v=7 --alsologtostderr": ha-290859
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-290859-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-290859 -n ha-290859
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-290859 -n ha-290859: exit status 7 (69.149809ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-290859" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (92.36s)
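For context on the assertion at ha_test.go:551: the check expects status output for three stopped control-plane nodes, but after the preceding DegradedAfterSecondaryNodeDelete step removed ha-290859-m03, the profile only carries ha-290859 and ha-290859-m02, so "not three kubelets are stopped" fires even though every remaining node stopped cleanly. The surviving node set can be listed directly (sketch, using the same binary the test invokes):

    out/minikube-linux-amd64 -p ha-290859 node list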

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (47.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-290859 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
ha_test.go:562: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ha-290859 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: signal: killed (44.954515768s)

                                                
                                                
-- stdout --
	* [ha-290859] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20512-1196368/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1196368/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "ha-290859" primary control-plane node in "ha-290859" cluster
	* Restarting existing kvm2 VM for "ha-290859" ...
	* Preparing Kubernetes v1.32.2 on containerd 1.7.23 ...
	* Enabled addons: 
	
	* Starting "ha-290859-m02" control-plane node in "ha-290859" cluster
	* Restarting existing kvm2 VM for "ha-290859-m02" ...
	* Found network options:
	  - NO_PROXY=192.168.39.110

                                                
                                                
-- /stdout --
** stderr ** 
	I0414 14:57:59.908690 1223212 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:57:59.908993 1223212 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:57:59.909003 1223212 out.go:358] Setting ErrFile to fd 2...
	I0414 14:57:59.909007 1223212 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:57:59.909197 1223212 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
	I0414 14:57:59.909730 1223212 out.go:352] Setting JSON to false
	I0414 14:57:59.910784 1223212 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":24023,"bootTime":1744618657,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 14:57:59.910901 1223212 start.go:139] virtualization: kvm guest
	I0414 14:57:59.912952 1223212 out.go:177] * [ha-290859] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 14:57:59.914339 1223212 out.go:177]   - MINIKUBE_LOCATION=20512
	I0414 14:57:59.914365 1223212 notify.go:220] Checking for updates...
	I0414 14:57:59.916736 1223212 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 14:57:59.918177 1223212 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:57:59.919485 1223212 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:57:59.920708 1223212 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 14:57:59.921837 1223212 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 14:57:59.923501 1223212 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:57:59.923922 1223212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:57:59.924007 1223212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:57:59.939633 1223212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45143
	I0414 14:57:59.940115 1223212 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:57:59.940678 1223212 main.go:141] libmachine: Using API Version  1
	I0414 14:57:59.940716 1223212 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:57:59.941059 1223212 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:57:59.941244 1223212 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:57:59.941520 1223212 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 14:57:59.941821 1223212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:57:59.941869 1223212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:57:59.957233 1223212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34875
	I0414 14:57:59.957737 1223212 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:57:59.958190 1223212 main.go:141] libmachine: Using API Version  1
	I0414 14:57:59.958214 1223212 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:57:59.958531 1223212 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:57:59.958723 1223212 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:57:59.993970 1223212 out.go:177] * Using the kvm2 driver based on existing profile
	I0414 14:57:59.994983 1223212 start.go:297] selected driver: kvm2
	I0414 14:57:59.995000 1223212 start.go:901] validating driver "kvm2" against &{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:57:59.995211 1223212 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 14:57:59.995687 1223212 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:57:59.995790 1223212 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20512-1196368/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 14:58:00.011995 1223212 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 14:58:00.012701 1223212 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 14:58:00.012737 1223212 cni.go:84] Creating CNI manager for ""
	I0414 14:58:00.012788 1223212 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0414 14:58:00.012855 1223212 start.go:340] cluster config:
	{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:58:00.013016 1223212 iso.go:125] acquiring lock: {Name:mkbf783c803effe6c4b8297ac6b84dcca9e29413 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:58:00.015136 1223212 out.go:177] * Starting "ha-290859" primary control-plane node in "ha-290859" cluster
	I0414 14:58:00.016292 1223212 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:58:00.016334 1223212 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
	I0414 14:58:00.016345 1223212 cache.go:56] Caching tarball of preloaded images
	I0414 14:58:00.016446 1223212 preload.go:172] Found /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0414 14:58:00.016459 1223212 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
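
preload.go above finds the preloaded-images tarball already cached on disk and skips the download. A small Go sketch of that check; preloadPath mirrors the filename in the log (the "v18" segment is the preload schema version), and the function names are invented here:

    package sketch

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // preloadPath builds the cache location seen in the log above.
    func preloadPath(minikubeHome, k8sVersion, runtime string) string {
        name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-amd64.tar.lz4",
            k8sVersion, runtime)
        return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
    }

    // havePreload reproduces the "found local preload ... skipping
    // download" decision: an existing file short-circuits the fetch.
    func havePreload(minikubeHome, k8sVersion, runtime string) bool {
        _, err := os.Stat(preloadPath(minikubeHome, k8sVersion, runtime))
        return err == nil
    }
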
	I0414 14:58:00.016597 1223212 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:58:00.016798 1223212 start.go:360] acquireMachinesLock for ha-290859: {Name:mk496006d22a0565bb9e0d565e1b3cb0cf0971cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 14:58:00.016846 1223212 start.go:364] duration metric: took 27.263µs to acquireMachinesLock for "ha-290859"
	I0414 14:58:00.016866 1223212 start.go:96] Skipping create...Using existing machine configuration
	I0414 14:58:00.016874 1223212 fix.go:54] fixHost starting: 
	I0414 14:58:00.017155 1223212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:58:00.017213 1223212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:58:00.032664 1223212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35747
	I0414 14:58:00.033250 1223212 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:58:00.033755 1223212 main.go:141] libmachine: Using API Version  1
	I0414 14:58:00.033780 1223212 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:58:00.034149 1223212 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:58:00.034367 1223212 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:58:00.034554 1223212 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:58:00.036208 1223212 fix.go:112] recreateIfNeeded on ha-290859: state=Stopped err=<nil>
	I0414 14:58:00.036248 1223212 main.go:141] libmachine: (ha-290859) Calling .DriverName
	W0414 14:58:00.036397 1223212 fix.go:138] unexpected machine state, will restart: <nil>
	I0414 14:58:00.038188 1223212 out.go:177] * Restarting existing kvm2 VM for "ha-290859" ...
	I0414 14:58:00.039436 1223212 main.go:141] libmachine: (ha-290859) Calling .Start
	I0414 14:58:00.039637 1223212 main.go:141] libmachine: (ha-290859) starting domain...
	I0414 14:58:00.039661 1223212 main.go:141] libmachine: (ha-290859) ensuring networks are active...
	I0414 14:58:00.040481 1223212 main.go:141] libmachine: (ha-290859) Ensuring network default is active
	I0414 14:58:00.040723 1223212 main.go:141] libmachine: (ha-290859) Ensuring network mk-ha-290859 is active
	I0414 14:58:00.041001 1223212 main.go:141] libmachine: (ha-290859) getting domain XML...
	I0414 14:58:00.041662 1223212 main.go:141] libmachine: (ha-290859) creating domain...
	I0414 14:58:01.250464 1223212 main.go:141] libmachine: (ha-290859) waiting for IP...
	I0414 14:58:01.251552 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:01.251894 1223212 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:58:01.252028 1223212 main.go:141] libmachine: (ha-290859) DBG | I0414 14:58:01.251923 1223247 retry.go:31] will retry after 221.862556ms: waiting for domain to come up
	I0414 14:58:01.475615 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:01.476157 1223212 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:58:01.476275 1223212 main.go:141] libmachine: (ha-290859) DBG | I0414 14:58:01.476211 1223247 retry.go:31] will retry after 281.470223ms: waiting for domain to come up
	I0414 14:58:01.759949 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:01.760507 1223212 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:58:01.760530 1223212 main.go:141] libmachine: (ha-290859) DBG | I0414 14:58:01.760471 1223247 retry.go:31] will retry after 452.37336ms: waiting for domain to come up
	I0414 14:58:02.214146 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:02.214560 1223212 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:58:02.214588 1223212 main.go:141] libmachine: (ha-290859) DBG | I0414 14:58:02.214522 1223247 retry.go:31] will retry after 404.819056ms: waiting for domain to come up
	I0414 14:58:02.621118 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:02.621581 1223212 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:58:02.621614 1223212 main.go:141] libmachine: (ha-290859) DBG | I0414 14:58:02.621531 1223247 retry.go:31] will retry after 614.590589ms: waiting for domain to come up
	I0414 14:58:03.237459 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:03.237956 1223212 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:58:03.238007 1223212 main.go:141] libmachine: (ha-290859) DBG | I0414 14:58:03.237910 1223247 retry.go:31] will retry after 643.121119ms: waiting for domain to come up
	I0414 14:58:03.882822 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:03.883240 1223212 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:58:03.883285 1223212 main.go:141] libmachine: (ha-290859) DBG | I0414 14:58:03.883214 1223247 retry.go:31] will retry after 1.002645406s: waiting for domain to come up
	I0414 14:58:04.887497 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:04.888012 1223212 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:58:04.888047 1223212 main.go:141] libmachine: (ha-290859) DBG | I0414 14:58:04.887973 1223247 retry.go:31] will retry after 1.241670442s: waiting for domain to come up
	I0414 14:58:06.131338 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:06.131753 1223212 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:58:06.131782 1223212 main.go:141] libmachine: (ha-290859) DBG | I0414 14:58:06.131710 1223247 retry.go:31] will retry after 1.35348732s: waiting for domain to come up
	I0414 14:58:07.487277 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:07.487771 1223212 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:58:07.487818 1223212 main.go:141] libmachine: (ha-290859) DBG | I0414 14:58:07.487730 1223247 retry.go:31] will retry after 1.453121759s: waiting for domain to come up
	I0414 14:58:08.943543 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:08.944076 1223212 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:58:08.944139 1223212 main.go:141] libmachine: (ha-290859) DBG | I0414 14:58:08.944037 1223247 retry.go:31] will retry after 2.633823626s: waiting for domain to come up
	I0414 14:58:11.579709 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:11.580096 1223212 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:58:11.580130 1223212 main.go:141] libmachine: (ha-290859) DBG | I0414 14:58:11.580056 1223247 retry.go:31] will retry after 2.536944167s: waiting for domain to come up
	I0414 14:58:14.119779 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:14.120215 1223212 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:58:14.120294 1223212 main.go:141] libmachine: (ha-290859) DBG | I0414 14:58:14.120145 1223247 retry.go:31] will retry after 3.647827366s: waiting for domain to come up
	I0414 14:58:17.771694 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:17.772226 1223212 main.go:141] libmachine: (ha-290859) found domain IP: 192.168.39.110
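
The retry.go lines above poll libvirt for the freshly started domain's DHCP address, sleeping a little longer after each miss (the logged delays grow from ~220ms to several seconds before the IP appears). A hedged Go sketch of that poll-with-backoff pattern; the lookup callback, the 3/2 growth factor, and the omitted jitter are simplifications, not minikube's exact parameters:

    package sketch

    import (
        "errors"
        "log"
        "time"
    )

    // waitForIP polls until lookup succeeds or the timeout elapses,
    // increasing the sleep between attempts as the log above does.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
        delay := 200 * time.Millisecond
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if ip, err := lookup(); err == nil {
                return ip, nil
            }
            log.Printf("will retry after %v: waiting for domain to come up", delay)
            time.Sleep(delay)
            delay = delay * 3 / 2 // coarse stand-in for the randomized backoff
        }
        return "", errors.New("timed out waiting for domain IP")
    }

Backoff keeps the libvirt/DHCP queries cheap while the guest is still booting, instead of hammering them at a fixed short interval.
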
	I0414 14:58:17.772251 1223212 main.go:141] libmachine: (ha-290859) reserving static IP address...
	I0414 14:58:17.772278 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has current primary IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:17.772678 1223212 main.go:141] libmachine: (ha-290859) reserved static IP address 192.168.39.110 for domain ha-290859
	I0414 14:58:17.772734 1223212 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "ha-290859", mac: "52:54:00:be:9f:8b", ip: "192.168.39.110"} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:10 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:58:17.772741 1223212 main.go:141] libmachine: (ha-290859) waiting for SSH...
	I0414 14:58:17.772763 1223212 main.go:141] libmachine: (ha-290859) DBG | skip adding static IP to network mk-ha-290859 - found existing host DHCP lease matching {name: "ha-290859", mac: "52:54:00:be:9f:8b", ip: "192.168.39.110"}
	I0414 14:58:17.772772 1223212 main.go:141] libmachine: (ha-290859) DBG | Getting to WaitForSSH function...
	I0414 14:58:17.774969 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:17.775382 1223212 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:10 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:58:17.775403 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:17.775553 1223212 main.go:141] libmachine: (ha-290859) DBG | Using SSH client type: external
	I0414 14:58:17.775602 1223212 main.go:141] libmachine: (ha-290859) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa (-rw-------)
	I0414 14:58:17.775636 1223212 main.go:141] libmachine: (ha-290859) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.110 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:58:17.775652 1223212 main.go:141] libmachine: (ha-290859) DBG | About to run SSH command:
	I0414 14:58:17.775669 1223212 main.go:141] libmachine: (ha-290859) DBG | exit 0
	I0414 14:58:17.903246 1223212 main.go:141] libmachine: (ha-290859) DBG | SSH cmd err, output: <nil>: 
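
The WaitForSSH step above probes the guest by running `exit 0` through the external ssh binary with the non-interactive options printed in the log; a zero exit status means sshd inside the VM is answering. A Go sketch of that probe (the name sshReady is invented here, and the flag list is copied from the log rather than exhaustive):

    package sketch

    import "os/exec"

    // sshReady runs `exit 0` on the guest; success means SSH is up.
    func sshReady(ip, keyPath string) bool {
        cmd := exec.Command("/usr/bin/ssh",
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "PasswordAuthentication=no",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@"+ip,
            "exit 0")
        return cmd.Run() == nil
    }
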
	I0414 14:58:17.903656 1223212 main.go:141] libmachine: (ha-290859) Calling .GetConfigRaw
	I0414 14:58:17.904372 1223212 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:58:17.907063 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:17.907435 1223212 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:10 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:58:17.907457 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:17.907692 1223212 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:58:17.907888 1223212 machine.go:93] provisionDockerMachine start ...
	I0414 14:58:17.907919 1223212 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:58:17.908126 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:58:17.910271 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:17.910626 1223212 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:10 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:58:17.910671 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:17.910737 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:58:17.910909 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:58:17.911086 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:58:17.911243 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:58:17.911462 1223212 main.go:141] libmachine: Using SSH client type: native
	I0414 14:58:17.911706 1223212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:58:17.911722 1223212 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 14:58:18.023283 1223212 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0414 14:58:18.023320 1223212 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:58:18.023571 1223212 buildroot.go:166] provisioning hostname "ha-290859"
	I0414 14:58:18.023599 1223212 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:58:18.023805 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:58:18.026202 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:18.026519 1223212 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:10 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:58:18.026551 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:18.026756 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:58:18.026939 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:58:18.027108 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:58:18.027324 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:58:18.027544 1223212 main.go:141] libmachine: Using SSH client type: native
	I0414 14:58:18.027750 1223212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:58:18.027761 1223212 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-290859 && echo "ha-290859" | sudo tee /etc/hostname
	I0414 14:58:18.152016 1223212 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-290859
	
	I0414 14:58:18.152050 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:58:18.154987 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:18.155441 1223212 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:10 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:58:18.155474 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:18.155648 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:58:18.155851 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:58:18.156014 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:58:18.156225 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:58:18.156390 1223212 main.go:141] libmachine: Using SSH client type: native
	I0414 14:58:18.156615 1223212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:58:18.156636 1223212 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-290859' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-290859/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-290859' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:58:18.279233 1223212 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:58:18.279292 1223212 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1196368/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1196368/.minikube}
	I0414 14:58:18.279314 1223212 buildroot.go:174] setting up certificates
	I0414 14:58:18.279325 1223212 provision.go:84] configureAuth start
	I0414 14:58:18.279335 1223212 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:58:18.279683 1223212 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:58:18.282508 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:18.282868 1223212 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:10 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:58:18.282892 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:18.283056 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:58:18.285526 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:18.285895 1223212 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:10 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:58:18.285936 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:18.286056 1223212 provision.go:143] copyHostCerts
	I0414 14:58:18.286090 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:58:18.286129 1223212 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem, removing ...
	I0414 14:58:18.286160 1223212 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:58:18.286238 1223212 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem (1082 bytes)
	I0414 14:58:18.286356 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:58:18.286383 1223212 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem, removing ...
	I0414 14:58:18.286390 1223212 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:58:18.286436 1223212 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem (1123 bytes)
	I0414 14:58:18.286522 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:58:18.286544 1223212 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem, removing ...
	I0414 14:58:18.286550 1223212 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:58:18.286587 1223212 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem (1675 bytes)
	I0414 14:58:18.286777 1223212 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem org=jenkins.ha-290859 san=[127.0.0.1 192.168.39.110 ha-290859 localhost minikube]
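
provision.go:117 above issues a server certificate signed by the local minikube CA, with SANs covering 127.0.0.1, the VM IP, the hostname, localhost and minikube. A crypto/x509 sketch of how such a cert can be produced; loading the CA pair from ca.pem/ca-key.pem is omitted, the values are taken from the log, and the 26280h lifetime comes from the CertExpiration field in the profile dump earlier:

    package sketch

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "time"
    )

    // newServerCert signs a server cert whose SANs match the log above.
    func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) (certPEM, keyPEM []byte, err error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-290859"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-290859", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.110")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
        keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
        return certPEM, keyPEM, nil
    }
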
	I0414 14:58:18.404382 1223212 provision.go:177] copyRemoteCerts
	I0414 14:58:18.404469 1223212 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:58:18.404506 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:58:18.407083 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:18.407459 1223212 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:10 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:58:18.407486 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:18.407697 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:58:18.407905 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:58:18.408063 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:58:18.408215 1223212 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:58:18.492950 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0414 14:58:18.493034 1223212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 14:58:18.514977 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0414 14:58:18.515064 1223212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 14:58:18.535925 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0414 14:58:18.535992 1223212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0414 14:58:18.556259 1223212 provision.go:87] duration metric: took 276.918906ms to configureAuth
	I0414 14:58:18.556283 1223212 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:58:18.556535 1223212 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:58:18.556551 1223212 machine.go:96] duration metric: took 648.650265ms to provisionDockerMachine
	I0414 14:58:18.556561 1223212 start.go:293] postStartSetup for "ha-290859" (driver="kvm2")
	I0414 14:58:18.556576 1223212 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:58:18.556625 1223212 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:58:18.556990 1223212 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:58:18.557040 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:58:18.559442 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:18.559758 1223212 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:10 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:58:18.559790 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:18.559961 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:58:18.560142 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:58:18.560322 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:58:18.560472 1223212 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:58:18.645532 1223212 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:58:18.649197 1223212 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:58:18.649221 1223212 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/addons for local assets ...
	I0414 14:58:18.649296 1223212 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/files for local assets ...
	I0414 14:58:18.649398 1223212 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> 12036392.pem in /etc/ssl/certs
	I0414 14:58:18.649410 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /etc/ssl/certs/12036392.pem
	I0414 14:58:18.649512 1223212 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:58:18.658292 1223212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:58:18.679212 1223212 start.go:296] duration metric: took 122.634461ms for postStartSetup
	I0414 14:58:18.679269 1223212 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:58:18.679626 1223212 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0414 14:58:18.679661 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:58:18.682445 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:18.682838 1223212 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:10 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:58:18.682859 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:18.683027 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:58:18.683228 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:58:18.683420 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:58:18.683626 1223212 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:58:18.773347 1223212 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0414 14:58:18.773415 1223212 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0414 14:58:18.812041 1223212 fix.go:56] duration metric: took 18.795158701s for fixHost
	I0414 14:58:18.812091 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:58:18.815201 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:18.815656 1223212 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:10 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:58:18.815683 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:18.815945 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:58:18.816172 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:58:18.816350 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:58:18.816491 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:58:18.816651 1223212 main.go:141] libmachine: Using SSH client type: native
	I0414 14:58:18.816863 1223212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:58:18.816872 1223212 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:58:18.931588 1223212 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744642698.905353914
	
	I0414 14:58:18.931629 1223212 fix.go:216] guest clock: 1744642698.905353914
	I0414 14:58:18.931638 1223212 fix.go:229] Guest: 2025-04-14 14:58:18.905353914 +0000 UTC Remote: 2025-04-14 14:58:18.812072502 +0000 UTC m=+18.943703181 (delta=93.281412ms)
	I0414 14:58:18.931658 1223212 fix.go:200] guest clock delta is within tolerance: 93.281412ms
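
The fix.go lines above run `date +%s.%N` on the guest, compare the result with the host clock, and skip resyncing because the 93ms delta is inside tolerance. A small Go sketch of that check; the float parse loses sub-microsecond precision, which is irrelevant at this tolerance:

    package sketch

    import (
        "strconv"
        "strings"
        "time"
    )

    // clockDelta parses `date +%s.%N` output and returns host - guest.
    func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return host.Sub(guest), nil
    }

    // withinTolerance is the accept/reject decision on that delta.
    func withinTolerance(delta, tolerance time.Duration) bool {
        if delta < 0 {
            delta = -delta
        }
        return delta <= tolerance
    }
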
	I0414 14:58:18.931663 1223212 start.go:83] releasing machines lock for "ha-290859", held for 18.91480633s
	I0414 14:58:18.931683 1223212 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:58:18.931990 1223212 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:58:18.934457 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:18.934814 1223212 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:10 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:58:18.934836 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:18.934999 1223212 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:58:18.935428 1223212 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:58:18.935599 1223212 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:58:18.935705 1223212 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:58:18.935762 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:58:18.935859 1223212 ssh_runner.go:195] Run: cat /version.json
	I0414 14:58:18.935876 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:58:18.938492 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:18.938853 1223212 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:10 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:58:18.938888 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:18.938907 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:18.939020 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:58:18.939200 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:58:18.939356 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:58:18.939491 1223212 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:10 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:58:18.939514 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:18.939594 1223212 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:58:18.939704 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:58:18.939854 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:58:18.940017 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:58:18.940204 1223212 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:58:19.049558 1223212 ssh_runner.go:195] Run: systemctl --version
	I0414 14:58:19.055598 1223212 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 14:58:19.060748 1223212 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:58:19.060807 1223212 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:58:19.075760 1223212 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 14:58:19.075796 1223212 start.go:495] detecting cgroup driver to use...
	I0414 14:58:19.075857 1223212 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0414 14:58:19.105781 1223212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0414 14:58:19.118798 1223212 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:58:19.118856 1223212 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:58:19.131627 1223212 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:58:19.144090 1223212 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:58:19.258491 1223212 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:58:19.395084 1223212 docker.go:233] disabling docker service ...
	I0414 14:58:19.395173 1223212 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:58:19.408548 1223212 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:58:19.421029 1223212 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:58:19.555293 1223212 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:58:19.666947 1223212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 14:58:19.679686 1223212 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:58:19.695999 1223212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0414 14:58:19.705372 1223212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0414 14:58:19.714794 1223212 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0414 14:58:19.714866 1223212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0414 14:58:19.724385 1223212 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:58:19.733717 1223212 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0414 14:58:19.742978 1223212 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:58:19.752264 1223212 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:58:19.762082 1223212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0414 14:58:19.771857 1223212 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0414 14:58:19.781591 1223212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
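
The series of sed invocations above rewrites /etc/containerd/config.toml in place: pin the pause image, force SystemdCgroup = false (the cgroupfs driver was chosen), migrate old runtime names to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. Two of those edits expressed with Go's regexp package rather than sed, as a sketch of the same substitutions:

    package sketch

    import "regexp"

    // rewriteContainerdConfig applies two of the edits logged above;
    // (?m) makes ^ and $ match per line, like sed does.
    func rewriteContainerdConfig(src string) string {
        systemd := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        src = systemd.ReplaceAllString(src, "${1}SystemdCgroup = false")

        confDir := regexp.MustCompile(`(?m)^(\s*)conf_dir = .*$`)
        src = confDir.ReplaceAllString(src, `${1}conf_dir = "/etc/cni/net.d"`)
        return src
    }
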
	I0414 14:58:19.791123 1223212 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:58:19.799649 1223212 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:58:19.799703 1223212 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:58:19.811997 1223212 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 14:58:19.820445 1223212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:58:19.932516 1223212 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0414 14:58:19.960151 1223212 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0414 14:58:19.960240 1223212 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:58:19.964906 1223212 retry.go:31] will retry after 786.537519ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0414 14:58:20.752230 1223212 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
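
After restarting containerd, the log shows one failed stat of /run/containerd/containerd.sock, one retry, then success, all under the stated 60s budget. A Go sketch of that wait loop; the fixed 500ms poll interval is an assumption (the actual retry above slept ~790ms):

    package sketch

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket stats the unix socket until it exists or the
    // deadline passes, mirroring "Will wait 60s for socket path".
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("socket %s did not appear within %v", path, timeout)
    }
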
	I0414 14:58:20.757280 1223212 start.go:563] Will wait 60s for crictl version
	I0414 14:58:20.757337 1223212 ssh_runner.go:195] Run: which crictl
	I0414 14:58:20.760924 1223212 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:58:20.797226 1223212 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0414 14:58:20.797293 1223212 ssh_runner.go:195] Run: containerd --version
	I0414 14:58:20.822463 1223212 ssh_runner.go:195] Run: containerd --version
	I0414 14:58:20.844657 1223212 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.23 ...
	I0414 14:58:20.845956 1223212 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:58:20.848590 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:20.848907 1223212 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:10 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:58:20.848937 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:20.849127 1223212 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 14:58:20.852831 1223212 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:58:20.865011 1223212 kubeadm.go:883] updating cluster {Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 14:58:20.865148 1223212 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:58:20.865196 1223212 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:58:20.896286 1223212 containerd.go:627] all images are preloaded for containerd runtime.
	I0414 14:58:20.896310 1223212 containerd.go:534] Images already preloaded, skipping extraction
	I0414 14:58:20.896363 1223212 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:58:20.926523 1223212 containerd.go:627] all images are preloaded for containerd runtime.
	I0414 14:58:20.926548 1223212 cache_images.go:84] Images are preloaded, skipping loading
	I0414 14:58:20.926563 1223212 kubeadm.go:934] updating node { 192.168.39.110 8443 v1.32.2 containerd true true} ...
	I0414 14:58:20.926675 1223212 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-290859 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.110
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 14:58:20.926741 1223212 ssh_runner.go:195] Run: sudo crictl info
	I0414 14:58:20.957700 1223212 cni.go:84] Creating CNI manager for ""
	I0414 14:58:20.957723 1223212 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0414 14:58:20.957737 1223212 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 14:58:20.957757 1223212 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.110 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-290859 NodeName:ha-290859 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.110"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.110 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 14:58:20.957864 1223212 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.110
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-290859"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.110"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.110"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
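
For reference, configs like the dump above are rendered from Go text templates filled in with the kubeadm options logged earlier; minikube's actual templates live elsewhere in its bootstrapper package. A minimal, self-contained sketch of that rendering step (the template fragment and struct here are hypothetical illustrations, not minikube's code):

package main

import (
	"os"
	"text/template"
)

// kubeadmTmpl is a hypothetical fragment of an InitConfiguration template.
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
`

type params struct {
	AdvertiseAddress string
	APIServerPort    int
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	// Values mirror the kubeadm options logged above.
	if err := t.Execute(os.Stdout, params{AdvertiseAddress: "192.168.39.110", APIServerPort: 8443}); err != nil {
		panic(err)
	}
}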
	
	I0414 14:58:20.957885 1223212 kube-vip.go:115] generating kube-vip config ...
	I0414 14:58:20.957935 1223212 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0414 14:58:20.980561 1223212 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0414 14:58:20.980679 1223212 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.10
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
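
kube-vip runs as a static pod: the generated manifest is dropped into the kubelet's staticPodPath (configured above as /etc/kubernetes/manifests) rather than applied through the API server, so the VIP comes up before the control plane is reachable. A minimal sketch of that hand-off, assuming the manifest bytes shown above are already in hand:

package main

import "os"

func main() {
	// manifest would hold the kube-vip pod YAML shown above (1442 bytes in this run).
	manifest := []byte("apiVersion: v1\nkind: Pod\n# ... rest of the kube-vip manifest ...\n")
	// kubelet watches staticPodPath and (re)starts static pods on file change.
	if err := os.WriteFile("/etc/kubernetes/manifests/kube-vip.yaml", manifest, 0o600); err != nil {
		panic(err)
	}
}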
	I0414 14:58:20.980734 1223212 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 14:58:20.992163 1223212 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 14:58:20.992242 1223212 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0414 14:58:21.000726 1223212 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0414 14:58:21.016373 1223212 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 14:58:21.031630 1223212 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0414 14:58:21.046888 1223212 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1442 bytes)
	I0414 14:58:21.062383 1223212 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0414 14:58:21.065785 1223212 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
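
The one-liner above makes the hosts entry idempotent: it filters out any existing control-plane.minikube.internal line, appends the VIP mapping, and swaps the result back in via a temp file. An equivalent sketch in Go (ensureHostsEntry is a hypothetical helper, not minikube code):

package main

import (
	"os"
	"strings"
)

// ensureHostsEntry mirrors the shell one-liner above: drop any stale line
// for host, append "ip<TAB>host", write a temp file, then swap it in.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale mapping; the grep -v in the log does the same
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}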
	I0414 14:58:21.076490 1223212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:58:21.181513 1223212 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:58:21.198460 1223212 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859 for IP: 192.168.39.110
	I0414 14:58:21.198484 1223212 certs.go:194] generating shared ca certs ...
	I0414 14:58:21.198507 1223212 certs.go:226] acquiring lock for ca certs: {Name:mk7215406b4c41badf9eca6bf9f1036fd88f670e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:58:21.198675 1223212 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key
	I0414 14:58:21.198746 1223212 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key
	I0414 14:58:21.198770 1223212 certs.go:256] generating profile certs ...
	I0414 14:58:21.198895 1223212 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key
	I0414 14:58:21.198988 1223212 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.c955092d
	I0414 14:58:21.199060 1223212 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key
	I0414 14:58:21.199084 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0414 14:58:21.199106 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0414 14:58:21.199124 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0414 14:58:21.199142 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0414 14:58:21.199160 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0414 14:58:21.199187 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0414 14:58:21.199220 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0414 14:58:21.199240 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0414 14:58:21.199340 1223212 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem (1338 bytes)
	W0414 14:58:21.199389 1223212 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639_empty.pem, impossibly tiny 0 bytes
	I0414 14:58:21.199405 1223212 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 14:58:21.199443 1223212 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem (1082 bytes)
	I0414 14:58:21.199480 1223212 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:58:21.199516 1223212 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem (1675 bytes)
	I0414 14:58:21.199569 1223212 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:58:21.199619 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem -> /usr/share/ca-certificates/1203639.pem
	I0414 14:58:21.199644 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /usr/share/ca-certificates/12036392.pem
	I0414 14:58:21.199662 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:58:21.200312 1223212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:58:21.245036 1223212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:58:21.270226 1223212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:58:21.301299 1223212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:58:21.329835 1223212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0414 14:58:21.357424 1223212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 14:58:21.381219 1223212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:58:21.405398 1223212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 14:58:21.441670 1223212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem --> /usr/share/ca-certificates/1203639.pem (1338 bytes)
	I0414 14:58:21.480779 1223212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /usr/share/ca-certificates/12036392.pem (1708 bytes)
	I0414 14:58:21.531604 1223212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:58:21.571382 1223212 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 14:58:21.594634 1223212 ssh_runner.go:195] Run: openssl version
	I0414 14:58:21.600389 1223212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1203639.pem && ln -fs /usr/share/ca-certificates/1203639.pem /etc/ssl/certs/1203639.pem"
	I0414 14:58:21.611391 1223212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1203639.pem
	I0414 14:58:21.615987 1223212 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1203639.pem
	I0414 14:58:21.616052 1223212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1203639.pem
	I0414 14:58:21.622246 1223212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1203639.pem /etc/ssl/certs/51391683.0"
	I0414 14:58:21.642602 1223212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12036392.pem && ln -fs /usr/share/ca-certificates/12036392.pem /etc/ssl/certs/12036392.pem"
	I0414 14:58:21.660916 1223212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12036392.pem
	I0414 14:58:21.665366 1223212 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/12036392.pem
	I0414 14:58:21.665425 1223212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12036392.pem
	I0414 14:58:21.672239 1223212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12036392.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 14:58:21.687862 1223212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:58:21.701990 1223212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:58:21.707086 1223212 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:58:21.707157 1223212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:58:21.713255 1223212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
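
Each CA certificate gets a symlink named after its OpenSSL subject hash (e.g. b5213941.0; the .0 suffix is a collision index), which is how TLS libraries locate trust anchors under /etc/ssl/certs. A sketch of the same scheme via os/exec (linkBySubjectHash is a hypothetical helper mirroring the openssl + ln -fs commands above):

package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash creates /etc/ssl/certs/<hash>.0 pointing at certPath.
func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // ignore error; emulates ln -fs overwriting
	return os.Symlink(certPath, link)
}

func main() {
	_ = linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem")
}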
	I0414 14:58:21.729614 1223212 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:58:21.736937 1223212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0414 14:58:21.745014 1223212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0414 14:58:21.751550 1223212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0414 14:58:21.758220 1223212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0414 14:58:21.766030 1223212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0414 14:58:21.771700 1223212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
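
The -checkend 86400 flag makes openssl exit non-zero if the certificate expires within the next 24 hours, which is how the restart path decides whether existing certs are still usable. The equivalent check with Go's crypto/x509 (a sketch; expiresWithin is a hypothetical helper, not minikube's validation code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires inside d,
// matching `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(expiring, err)
}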
	I0414 14:58:21.776978 1223212 kubeadm.go:392] StartCluster: {Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:58:21.777094 1223212 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0414 14:58:21.777176 1223212 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 14:58:21.809866 1223212 cri.go:89] found id: "ea9e85492cab11d04c4610b349d14e65f48b4f7ef9b1bf510cce3f98d9f23a26"
	I0414 14:58:21.809897 1223212 cri.go:89] found id: "d9bf8cef6e9551ba044bfa75d53bebdabf94a544fb35bcba8ae9dda955c97297"
	I0414 14:58:21.809902 1223212 cri.go:89] found id: "c3c2f4d5fe419392ff3850394da92847c7bcfe369f4d0eddffd38c2a59b41025"
	I0414 14:58:21.809906 1223212 cri.go:89] found id: "607041fc2f4edc17de3caec2d00a9f9b49a94ed154254da72ec094a0f148db36"
	I0414 14:58:21.809910 1223212 cri.go:89] found id: "1c01d86a74294bbfd5f487ec85ffc0f35cc4b979ad90c940eea5a17a8e5f46fb"
	I0414 14:58:21.809921 1223212 cri.go:89] found id: "e8658abcccb8b10d531ad775050d96f3375e484efcbaba4d5509a7a22f3608a9"
	I0414 14:58:21.809926 1223212 cri.go:89] found id: "29445064369e58250458efcfeed9a28e6da75ce4bcb6f15c9e58844eb1ba811e"
	I0414 14:58:21.809929 1223212 cri.go:89] found id: "6bb8bbfa1b317897b9bcc96ba49e7c68f83cc4409dd69a72b86f0448aa2519ea"
	I0414 14:58:21.809934 1223212 cri.go:89] found id: "00b109770be1cb3d772b7d440ccc36c098a8627e8280f195c263a0a87a6e0c07"
	I0414 14:58:21.809941 1223212 cri.go:89] found id: "6dc42b262abf6aa5624bcc0028b6e34ab24ddcaffd2215fc0ae3cc2554bd37e7"
	I0414 14:58:21.809946 1223212 cri.go:89] found id: ""
	I0414 14:58:21.809998 1223212 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0414 14:58:21.823102 1223212 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-04-14T14:58:21Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0414 14:58:21.823202 1223212 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 14:58:21.832381 1223212 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0414 14:58:21.832400 1223212 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0414 14:58:21.832444 1223212 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0414 14:58:21.841384 1223212 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0414 14:58:21.841800 1223212 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-290859" does not appear in /home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:58:21.841959 1223212 kubeconfig.go:62] /home/jenkins/minikube-integration/20512-1196368/kubeconfig needs updating (will repair): [kubeconfig missing "ha-290859" cluster setting kubeconfig missing "ha-290859" context setting]
	I0414 14:58:21.842303 1223212 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/kubeconfig: {Name:mkeb969af3beabfdafe344f27031959a97621135 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:58:21.842683 1223212 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:58:21.842848 1223212 kapi.go:59] client config for ha-290859: &rest.Config{Host:"https://192.168.39.110:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt", KeyFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key", CAFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
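
The kubeconfig repair above writes the missing "ha-290859" cluster and context entries back into the file before building the client config. With client-go's clientcmd API that amounts to roughly the following (a sketch; field values are taken from the log, and the user entry is assumed to already exist in this kubeconfig):

package main

import (
	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/home/jenkins/minikube-integration/20512-1196368/kubeconfig"
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		panic(err)
	}
	// Add the missing cluster and context entries noted in the log.
	cfg.Clusters["ha-290859"] = &api.Cluster{
		Server:               "https://192.168.39.110:8443",
		CertificateAuthority: "/home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt",
	}
	cfg.Contexts["ha-290859"] = &api.Context{Cluster: "ha-290859", AuthInfo: "ha-290859"}
	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		panic(err)
	}
}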
	I0414 14:58:21.843405 1223212 cert_rotation.go:140] Starting client certificate rotation controller
	I0414 14:58:21.843359 1223212 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0414 14:58:21.843746 1223212 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0414 14:58:21.843757 1223212 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0414 14:58:21.843768 1223212 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0414 14:58:21.844495 1223212 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0414 14:58:21.853774 1223212 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.110
	I0414 14:58:21.853792 1223212 kubeadm.go:597] duration metric: took 21.386394ms to restartPrimaryControlPlane
	I0414 14:58:21.853799 1223212 kubeadm.go:394] duration metric: took 76.837265ms to StartCluster
	I0414 14:58:21.853812 1223212 settings.go:142] acquiring lock: {Name:mk41907a6d0da0bb56b7cd58b5d8065ec36ecc97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:58:21.853868 1223212 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:58:21.854394 1223212 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/kubeconfig: {Name:mkeb969af3beabfdafe344f27031959a97621135 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:58:21.854586 1223212 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0414 14:58:21.854607 1223212 start.go:241] waiting for startup goroutines ...
	I0414 14:58:21.854630 1223212 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 14:58:21.854791 1223212 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:58:21.857034 1223212 out.go:177] * Enabled addons: 
	I0414 14:58:21.858001 1223212 addons.go:514] duration metric: took 3.389634ms for enable addons: enabled=[]
	I0414 14:58:21.858033 1223212 start.go:246] waiting for cluster config update ...
	I0414 14:58:21.858046 1223212 start.go:255] writing updated cluster config ...
	I0414 14:58:21.859392 1223212 out.go:201] 
	I0414 14:58:21.860612 1223212 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:58:21.860707 1223212 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:58:21.862060 1223212 out.go:177] * Starting "ha-290859-m02" control-plane node in "ha-290859" cluster
	I0414 14:58:21.863006 1223212 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:58:21.863027 1223212 cache.go:56] Caching tarball of preloaded images
	I0414 14:58:21.863123 1223212 preload.go:172] Found /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0414 14:58:21.863134 1223212 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0414 14:58:21.863217 1223212 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:58:21.863406 1223212 start.go:360] acquireMachinesLock for ha-290859-m02: {Name:mk496006d22a0565bb9e0d565e1b3cb0cf0971cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 14:58:21.863451 1223212 start.go:364] duration metric: took 26.83µs to acquireMachinesLock for "ha-290859-m02"
	I0414 14:58:21.863466 1223212 start.go:96] Skipping create...Using existing machine configuration
	I0414 14:58:21.863473 1223212 fix.go:54] fixHost starting: m02
	I0414 14:58:21.863734 1223212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:58:21.863768 1223212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:58:21.878965 1223212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44555
	I0414 14:58:21.879467 1223212 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:58:21.879953 1223212 main.go:141] libmachine: Using API Version  1
	I0414 14:58:21.879973 1223212 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:58:21.880327 1223212 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:58:21.880531 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:58:21.880714 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetState
	I0414 14:58:21.882037 1223212 fix.go:112] recreateIfNeeded on ha-290859-m02: state=Stopped err=<nil>
	I0414 14:58:21.882056 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	W0414 14:58:21.882236 1223212 fix.go:138] unexpected machine state, will restart: <nil>
	I0414 14:58:21.884404 1223212 out.go:177] * Restarting existing kvm2 VM for "ha-290859-m02" ...
	I0414 14:58:21.885489 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .Start
	I0414 14:58:21.885650 1223212 main.go:141] libmachine: (ha-290859-m02) starting domain...
	I0414 14:58:21.885665 1223212 main.go:141] libmachine: (ha-290859-m02) ensuring networks are active...
	I0414 14:58:21.886323 1223212 main.go:141] libmachine: (ha-290859-m02) Ensuring network default is active
	I0414 14:58:21.886645 1223212 main.go:141] libmachine: (ha-290859-m02) Ensuring network mk-ha-290859 is active
	I0414 14:58:21.886949 1223212 main.go:141] libmachine: (ha-290859-m02) getting domain XML...
	I0414 14:58:21.887589 1223212 main.go:141] libmachine: (ha-290859-m02) creating domain...
	I0414 14:58:23.097121 1223212 main.go:141] libmachine: (ha-290859-m02) waiting for IP...
	I0414 14:58:23.098035 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:23.098452 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:58:23.098564 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:58:23.098455 1223387 retry.go:31] will retry after 233.237841ms: waiting for domain to come up
	I0414 14:58:23.332867 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:23.333383 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:58:23.333400 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:58:23.333351 1223387 retry.go:31] will retry after 381.899222ms: waiting for domain to come up
	I0414 14:58:23.716962 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:23.717333 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:58:23.717359 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:58:23.717318 1223387 retry.go:31] will retry after 412.191626ms: waiting for domain to come up
	I0414 14:58:24.130877 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:24.131406 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:58:24.131437 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:58:24.131372 1223387 retry.go:31] will retry after 414.091417ms: waiting for domain to come up
	I0414 14:58:24.547112 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:24.547626 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:58:24.547654 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:58:24.547593 1223387 retry.go:31] will retry after 644.002595ms: waiting for domain to come up
	I0414 14:58:25.193608 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:25.194062 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:58:25.194112 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:58:25.194018 1223387 retry.go:31] will retry after 830.541478ms: waiting for domain to come up
	I0414 14:58:26.026072 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:26.026545 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:58:26.026574 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:58:26.026503 1223387 retry.go:31] will retry after 1.141092073s: waiting for domain to come up
	I0414 14:58:27.169323 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:27.169749 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:58:27.169775 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:58:27.169715 1223387 retry.go:31] will retry after 1.081212512s: waiting for domain to come up
	I0414 14:58:28.252530 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:28.252969 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:58:28.253063 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:58:28.252956 1223387 retry.go:31] will retry after 1.510553531s: waiting for domain to come up
	I0414 14:58:29.764716 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:29.765248 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:58:29.765280 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:58:29.765202 1223387 retry.go:31] will retry after 1.415152488s: waiting for domain to come up
	I0414 14:58:31.182558 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:31.183006 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:58:31.183044 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:58:31.182956 1223387 retry.go:31] will retry after 2.534892478s: waiting for domain to come up
	I0414 14:58:33.720236 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:33.720529 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:58:33.720554 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:58:33.720505 1223387 retry.go:31] will retry after 3.490878268s: waiting for domain to come up
	I0414 14:58:37.213273 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:37.213780 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:58:37.213808 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:58:37.213759 1223387 retry.go:31] will retry after 4.456200887s: waiting for domain to come up
	I0414 14:58:41.675426 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:41.675912 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has current primary IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:41.675933 1223212 main.go:141] libmachine: (ha-290859-m02) found domain IP: 192.168.39.111
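
The "will retry after ..." lines above (233ms, 381ms, ... up to 4.4s) come from a polling loop with jittered, growing backoff while the domain's DHCP lease appears in libvirt. A minimal sketch of that pattern (lookupIP is a hypothetical stand-in for querying the lease by MAC address):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for reading libvirt's DHCP leases
// for the domain's MAC address; "" means no lease yet.
func lookupIP() string { return "" }

func main() {
	delay := 200 * time.Millisecond
	for i := 0; i < 15; i++ {
		if ip := lookupIP(); ip != "" {
			fmt.Println("found domain IP:", ip)
			return
		}
		// Randomize and grow the delay, as in the retry lines above.
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", jittered)
		time.Sleep(jittered)
		delay = delay * 3 / 2
	}
}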
	I0414 14:58:41.675974 1223212 main.go:141] libmachine: (ha-290859-m02) reserving static IP address...
	I0414 14:58:41.676508 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "ha-290859-m02", mac: "52:54:00:f0:fd:94", ip: "192.168.39.111"} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:32 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:58:41.676570 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | skip adding static IP to network mk-ha-290859 - found existing host DHCP lease matching {name: "ha-290859-m02", mac: "52:54:00:f0:fd:94", ip: "192.168.39.111"}
	I0414 14:58:41.676591 1223212 main.go:141] libmachine: (ha-290859-m02) reserved static IP address 192.168.39.111 for domain ha-290859-m02
	I0414 14:58:41.676605 1223212 main.go:141] libmachine: (ha-290859-m02) waiting for SSH...
	I0414 14:58:41.676613 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | Getting to WaitForSSH function...
	I0414 14:58:41.678641 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:41.678981 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:32 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:58:41.679009 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:41.679153 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | Using SSH client type: external
	I0414 14:58:41.679193 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa (-rw-------)
	I0414 14:58:41.679227 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:58:41.679240 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | About to run SSH command:
	I0414 14:58:41.679273 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | exit 0
	I0414 14:58:41.799010 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | SSH cmd err, output: <nil>: 
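
"waiting for SSH" is satisfied by repeatedly running a no-op command (exit 0) through the external ssh client with the flags logged above until it succeeds. A condensed sketch of that probe (address and key path from the log; the retry cadence here is an assumption):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	key := "/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa"
	args := []string{
		"-o", "ConnectTimeout=10", "-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null", "-o", "IdentitiesOnly=yes",
		"-i", key, "-p", "22", "docker@192.168.39.111", "exit 0",
	}
	for {
		// Succeeds only once sshd in the guest accepts the key and runs the command.
		if err := exec.Command("ssh", args...).Run(); err == nil {
			fmt.Println("SSH is up")
			return
		}
		time.Sleep(time.Second)
	}
}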
	I0414 14:58:41.799442 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
	I0414 14:58:41.800097 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:58:41.802533 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:41.802965 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:32 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:58:41.803004 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:41.803343 1223212 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:58:41.803578 1223212 machine.go:93] provisionDockerMachine start ...
	I0414 14:58:41.803601 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:58:41.803838 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:58:41.806135 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:41.806485 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:32 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:58:41.806515 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:41.806666 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:58:41.806832 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:58:41.806988 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:58:41.807098 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:58:41.807231 1223212 main.go:141] libmachine: Using SSH client type: native
	I0414 14:58:41.807445 1223212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:58:41.807455 1223212 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 14:58:41.903915 1223212 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0414 14:58:41.903952 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:58:41.904210 1223212 buildroot.go:166] provisioning hostname "ha-290859-m02"
	I0414 14:58:41.904246 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:58:41.904515 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:58:41.907832 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:41.908302 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:32 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:58:41.908340 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:41.908525 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:58:41.908726 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:58:41.908870 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:58:41.908993 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:58:41.909172 1223212 main.go:141] libmachine: Using SSH client type: native
	I0414 14:58:41.909456 1223212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:58:41.909476 1223212 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-290859-m02 && echo "ha-290859-m02" | sudo tee /etc/hostname
	I0414 14:58:42.022769 1223212 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-290859-m02
	
	I0414 14:58:42.022799 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:58:42.025802 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:42.026202 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:32 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:58:42.026236 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:42.026466 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:58:42.026685 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:58:42.026852 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:58:42.026987 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:58:42.027137 1223212 main.go:141] libmachine: Using SSH client type: native
	I0414 14:58:42.027415 1223212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:58:42.027436 1223212 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-290859-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-290859-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-290859-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:58:42.131347 1223212 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:58:42.131386 1223212 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1196368/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1196368/.minikube}
	I0414 14:58:42.131407 1223212 buildroot.go:174] setting up certificates
	I0414 14:58:42.131419 1223212 provision.go:84] configureAuth start
	I0414 14:58:42.131436 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:58:42.131786 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:58:42.134732 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:42.135112 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:32 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:58:42.135145 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:42.135324 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:58:42.137944 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:42.138395 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:32 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:58:42.138430 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:42.138552 1223212 provision.go:143] copyHostCerts
	I0414 14:58:42.138580 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:58:42.138615 1223212 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem, removing ...
	I0414 14:58:42.138623 1223212 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:58:42.138676 1223212 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem (1082 bytes)
	I0414 14:58:42.138749 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:58:42.138766 1223212 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem, removing ...
	I0414 14:58:42.138772 1223212 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:58:42.138790 1223212 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem (1123 bytes)
	I0414 14:58:42.138830 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:58:42.138846 1223212 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem, removing ...
	I0414 14:58:42.138852 1223212 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:58:42.138869 1223212 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem (1675 bytes)
	I0414 14:58:42.138915 1223212 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem org=jenkins.ha-290859-m02 san=[127.0.0.1 192.168.39.111 ha-290859-m02 localhost minikube]
	I0414 14:58:42.180132 1223212 provision.go:177] copyRemoteCerts
	I0414 14:58:42.180196 1223212 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:58:42.180229 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:58:42.183220 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:42.183709 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:32 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:58:42.183744 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:42.183976 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:58:42.184199 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:58:42.184398 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:58:42.184547 1223212 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:58:42.261124 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0414 14:58:42.261193 1223212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 14:58:42.286821 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0414 14:58:42.286907 1223212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0414 14:58:42.312791 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0414 14:58:42.312861 1223212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
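
The ssh_runner "scp" transfers above stream file contents over the already-established SSH session rather than shelling out to scp. A rough equivalent using golang.org/x/crypto/ssh (host, key path, and target path from the log; error handling abbreviated, and the sudo-tee approach is an assumption about the mechanism):

package main

import (
	"bytes"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPEM, _ := os.ReadFile("/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa")
	signer, _ := ssh.ParsePrivateKey(keyPEM)
	client, err := ssh.Dial("tcp", "192.168.39.111:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; no known_hosts
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	data, _ := os.ReadFile("/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem")
	sess, _ := client.NewSession()
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	// Pipe the bytes into sudo tee on the remote side to land them as root.
	if err := sess.Run("sudo tee /etc/docker/ca.pem >/dev/null"); err != nil {
		panic(err)
	}
}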
	I0414 14:58:42.338189 1223212 provision.go:87] duration metric: took 206.752494ms to configureAuth
	I0414 14:58:42.338230 1223212 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:58:42.338511 1223212 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:58:42.338534 1223212 machine.go:96] duration metric: took 534.941447ms to provisionDockerMachine
	I0414 14:58:42.338548 1223212 start.go:293] postStartSetup for "ha-290859-m02" (driver="kvm2")
	I0414 14:58:42.338563 1223212 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:58:42.338604 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:58:42.338944 1223212 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:58:42.338979 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:58:42.341964 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:42.342334 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:32 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:58:42.342364 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:42.342515 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:58:42.342710 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:58:42.342874 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:58:42.343062 1223212 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:58:42.421434 1223212 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:58:42.425440 1223212 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:58:42.425468 1223212 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/addons for local assets ...
	I0414 14:58:42.425530 1223212 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/files for local assets ...
	I0414 14:58:42.425613 1223212 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> 12036392.pem in /etc/ssl/certs
	I0414 14:58:42.425628 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /etc/ssl/certs/12036392.pem
	I0414 14:58:42.425725 1223212 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:58:42.434329 1223212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:58:42.456138 1223212 start.go:296] duration metric: took 117.569647ms for postStartSetup
	I0414 14:58:42.456188 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:58:42.456524 1223212 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0414 14:58:42.456551 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:58:42.459140 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:42.459524 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:32 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:58:42.459555 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:42.459687 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:58:42.459867 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:58:42.460013 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:58:42.460149 1223212 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:58:42.540976 1223212 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0414 14:58:42.541063 1223212 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
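
The restore step above lists /var/lib/minikube/backup (here just [etc]) and rsyncs each entry back onto the root filesystem; --update keeps newer files in place and --archive preserves permissions and ownership. A sketch of the same flow, with the assumption of running os/exec locally rather than over SSH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "ls", "--almost-all", "-1", "/var/lib/minikube/backup").Output()
	if err != nil {
		fmt.Println("no backup to restore:", err)
		return
	}
	for _, name := range strings.Fields(string(out)) { // e.g. "etc"
		// rsync the backed-up tree over the live filesystem without clobbering newer files
		exec.Command("sudo", "rsync", "--archive", "--update",
			"/var/lib/minikube/backup/"+name, "/").Run()
	}
}
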
	I0414 14:58:42.596577 1223212 fix.go:56] duration metric: took 20.733082448s for fixHost
	I0414 14:58:42.596647 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:58:42.599896 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:42.600323 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:32 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:58:42.600353 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:42.600556 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:58:42.600758 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:58:42.600895 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:58:42.601026 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:58:42.601178 1223212 main.go:141] libmachine: Using SSH client type: native
	I0414 14:58:42.601396 1223212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:58:42.601406 1223212 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:58:42.695621 1223212 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744642722.664070450
	
	I0414 14:58:42.695654 1223212 fix.go:216] guest clock: 1744642722.664070450
	I0414 14:58:42.695665 1223212 fix.go:229] Guest: 2025-04-14 14:58:42.66407045 +0000 UTC Remote: 2025-04-14 14:58:42.596616108 +0000 UTC m=+42.728246790 (delta=67.454342ms)
	I0414 14:58:42.695688 1223212 fix.go:200] guest clock delta is within tolerance: 67.454342ms
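
The guest-clock check above parses `date +%s.%N` from the VM, diffs it against the host clock, and only resyncs when the delta exceeds a tolerance (67ms passes here). A sketch of that computation; the 2s tolerance is an assumed value for illustration, and with this canned timestamp the delta is of course large:

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

func main() {
	guestOut := "1744642722.664070450" // output of `date +%s.%N` on the guest
	parts := strings.SplitN(guestOut, ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)
	delta := time.Since(guest) // host "now" minus guest clock
	if math.Abs(delta.Seconds()) > 2 { // tolerance (assumed)
		fmt.Println("guest clock delta", delta, "exceeds tolerance; would resync")
	} else {
		fmt.Println("guest clock delta is within tolerance:", delta)
	}
}
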
	I0414 14:58:42.695694 1223212 start.go:83] releasing machines lock for "ha-290859-m02", held for 20.832233241s
	I0414 14:58:42.695719 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:58:42.696053 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:58:42.698944 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:42.699376 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:32 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:58:42.699403 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:42.701043 1223212 out.go:177] * Found network options:
	I0414 14:58:42.702133 1223212 out.go:177]   - NO_PROXY=192.168.39.110
	W0414 14:58:42.703160 1223212 proxy.go:119] fail to check proxy env: Error ip not in block
	I0414 14:58:42.703185 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:58:42.703723 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:58:42.703934 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:58:42.704032 1223212 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:58:42.704077 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	W0414 14:58:42.704139 1223212 proxy.go:119] fail to check proxy env: Error ip not in block
	I0414 14:58:42.704216 1223212 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0414 14:58:42.704240 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:58:42.706894 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:42.707126 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:42.707310 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:32 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:58:42.707337 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:42.707464 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:58:42.707614 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:32 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:58:42.707638 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:42.707641 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:58:42.707801 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:58:42.707803 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:58:42.707991 1223212 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:58:42.708065 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:58:42.708238 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:58:42.708378 1223212 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	W0414 14:58:42.806481 1223212 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:58:42.806558 1223212 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:58:42.821229 1223212 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
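
The find/mv pair above side-lines any bridge or podman CNI configs (renaming them to *.mk_disabled) so the recommended kindnet CNI can own pod networking. A local Go sketch of the same rename-to-disable idea; running it against /etc/cni/net.d requires root:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, _ := filepath.Glob(pat)
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled on a previous run
			}
			if err := os.Rename(m, m+".mk_disabled"); err == nil {
				fmt.Println("disabled", m)
			}
		}
	}
}
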
	I0414 14:58:42.821258 1223212 start.go:495] detecting cgroup driver to use...
	I0414 14:58:42.821334 1223212 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0414 14:58:42.847568 1223212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0414 14:58:42.861235 1223212 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:58:42.861309 1223212 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:58:42.874366 1223212 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:58:42.886734 1223212 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:58:43.013769 1223212 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:58:43.155359 1223212 docker.go:233] disabling docker service ...
	I0414 14:58:43.155439 1223212 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:58:43.173270 1223212 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:58:43.186540 1223212 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:58:43.324274 1223212 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:58:43.433306 1223212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
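
Before configuring containerd, the competing runtimes are stopped and masked; the `is-active --quiet` probes above report service state purely through the exit code. A sketch of reading that exit code in Go (systemctl typically returns 0 for active and a non-zero code, commonly 3, for inactive):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "docker").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("docker inactive, exit code:", ee.ExitCode())
	} else if err == nil {
		fmt.Println("docker still active")
	}
}
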
	I0414 14:58:43.446382 1223212 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:58:43.463462 1223212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0414 14:58:43.473115 1223212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0414 14:58:43.483047 1223212 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0414 14:58:43.483124 1223212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0414 14:58:43.492947 1223212 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:58:43.502800 1223212 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0414 14:58:43.512732 1223212 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:58:43.522798 1223212 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:58:43.533435 1223212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0414 14:58:43.545327 1223212 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0414 14:58:43.555583 1223212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
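
The sed runs above rewrite /etc/containerd/config.toml in place: pin the pause image to registry.k8s.io/pause:3.10, force SystemdCgroup = false (the cgroupfs driver chosen above), and point conf_dir at /etc/cni/net.d. A sketch of two of those edits using Go's regexp on the config as a string, equivalent in spirit to the sed invocations:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
  SystemdCgroup = true`
	// (?m) makes ^ and $ match per line, like sed's default addressing
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	conf = re.ReplaceAllString(conf, `${1}SystemdCgroup = false`)
	re = regexp.MustCompile(`(?m)^(\s*)sandbox_image = .*$`)
	conf = re.ReplaceAllString(conf, `${1}sandbox_image = "registry.k8s.io/pause:3.10"`)
	fmt.Println(conf)
}
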
	I0414 14:58:43.565976 1223212 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:58:43.575081 1223212 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:58:43.575152 1223212 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:58:43.587443 1223212 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
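
The sequence above probes net.bridge.bridge-nf-call-iptables, and when the sysctl node is missing (status 255, as logged) it loads br_netfilter before enabling IP forwarding. A sketch of that probe-then-fallback flow; writing /proc/sys requires root, and the error handling here is deliberately minimal:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// The sysctl node is absent until the module is loaded; not fatal.
		fmt.Println("probe failed, loading br_netfilter:", err)
		exec.Command("sudo", "modprobe", "br_netfilter").Run()
	}
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0); err != nil {
		fmt.Println("enable ip_forward:", err)
	}
}
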
	I0414 14:58:43.596514 1223212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:58:43.705808 1223212 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0414 14:58:43.737605 1223212 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0414 14:58:43.737690 1223212 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:58:43.743606 1223212 retry.go:31] will retry after 1.497760722s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory

** /stderr **
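
The capture above ends with minikube still polling for the containerd socket: after `systemctl restart containerd` it waits up to 60s for /run/containerd/containerd.sock, retrying the stat on failure. A local polling sketch of that wait; the ~1.5s sleep mirrors the retry interval in the log, and this is an illustration rather than minikube's retry.go:

package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	const sock = "/run/containerd/containerd.sock"
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(sock); err == nil {
			fmt.Println("socket ready:", sock)
			return
		}
		time.Sleep(1500 * time.Millisecond) // comparable to the retry cadence above
	}
	fmt.Println("timed out waiting for", sock)
}
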
ha_test.go:564: failed to start cluster. args "out/minikube-linux-amd64 start -p ha-290859 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd" : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-290859 -n ha-290859
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-290859 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-290859 logs -n 25: (1.477584309s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg --          |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x -- nslookup |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx -- nslookup |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg -- nslookup |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- get pods -o          | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-8bg2x             |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC |                     |
	|         | busybox-58667487b6-q9jvx             |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg             |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-290859 -- exec                 | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:41 UTC | 14 Apr 25 14:41 UTC |
	|         | busybox-58667487b6-t6bgg -- sh       |           |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1            |           |         |         |                     |                     |
	| node    | add -p ha-290859 -v=7                | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:42 UTC | 14 Apr 25 14:42 UTC |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-290859 node stop m02 -v=7         | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:42 UTC | 14 Apr 25 14:42 UTC |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-290859 node start m02 -v=7        | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:43 UTC |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-290859 -v=7               | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:48 UTC |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| stop    | -p ha-290859 -v=7                    | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:48 UTC | 14 Apr 25 14:51 UTC |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| start   | -p ha-290859 --wait=true -v=7        | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:51 UTC |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-290859                    | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:56 UTC |                     |
	| node    | ha-290859 node delete m03 -v=7       | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:56 UTC | 14 Apr 25 14:56 UTC |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| stop    | ha-290859 stop -v=7                  | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:56 UTC | 14 Apr 25 14:57 UTC |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| start   | -p ha-290859 --wait=true             | ha-290859 | jenkins | v1.35.0 | 14 Apr 25 14:57 UTC |                     |
	|         | -v=7 --alsologtostderr               |           |         |         |                     |                     |
	|         | --driver=kvm2                        |           |         |         |                     |                     |
	|         | --container-runtime=containerd       |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 14:57:59
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 14:57:59.908690 1223212 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:57:59.908993 1223212 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:57:59.909003 1223212 out.go:358] Setting ErrFile to fd 2...
	I0414 14:57:59.909007 1223212 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:57:59.909197 1223212 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
	I0414 14:57:59.909730 1223212 out.go:352] Setting JSON to false
	I0414 14:57:59.910784 1223212 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":24023,"bootTime":1744618657,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 14:57:59.910901 1223212 start.go:139] virtualization: kvm guest
	I0414 14:57:59.912952 1223212 out.go:177] * [ha-290859] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 14:57:59.914339 1223212 out.go:177]   - MINIKUBE_LOCATION=20512
	I0414 14:57:59.914365 1223212 notify.go:220] Checking for updates...
	I0414 14:57:59.916736 1223212 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 14:57:59.918177 1223212 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:57:59.919485 1223212 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:57:59.920708 1223212 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 14:57:59.921837 1223212 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 14:57:59.923501 1223212 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:57:59.923922 1223212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:57:59.924007 1223212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:57:59.939633 1223212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45143
	I0414 14:57:59.940115 1223212 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:57:59.940678 1223212 main.go:141] libmachine: Using API Version  1
	I0414 14:57:59.940716 1223212 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:57:59.941059 1223212 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:57:59.941244 1223212 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:57:59.941520 1223212 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 14:57:59.941821 1223212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:57:59.941869 1223212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:57:59.957233 1223212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34875
	I0414 14:57:59.957737 1223212 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:57:59.958190 1223212 main.go:141] libmachine: Using API Version  1
	I0414 14:57:59.958214 1223212 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:57:59.958531 1223212 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:57:59.958723 1223212 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:57:59.993970 1223212 out.go:177] * Using the kvm2 driver based on existing profile
	I0414 14:57:59.994983 1223212 start.go:297] selected driver: kvm2
	I0414 14:57:59.995000 1223212 start.go:901] validating driver "kvm2" against &{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:57:59.995211 1223212 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 14:57:59.995687 1223212 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:57:59.995790 1223212 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20512-1196368/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 14:58:00.011995 1223212 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 14:58:00.012701 1223212 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 14:58:00.012737 1223212 cni.go:84] Creating CNI manager for ""
	I0414 14:58:00.012788 1223212 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0414 14:58:00.012855 1223212 start.go:340] cluster config:
	{Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:58:00.013016 1223212 iso.go:125] acquiring lock: {Name:mkbf783c803effe6c4b8297ac6b84dcca9e29413 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:58:00.015136 1223212 out.go:177] * Starting "ha-290859" primary control-plane node in "ha-290859" cluster
	I0414 14:58:00.016292 1223212 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:58:00.016334 1223212 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
	I0414 14:58:00.016345 1223212 cache.go:56] Caching tarball of preloaded images
	I0414 14:58:00.016446 1223212 preload.go:172] Found /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0414 14:58:00.016459 1223212 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0414 14:58:00.016597 1223212 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:58:00.016798 1223212 start.go:360] acquireMachinesLock for ha-290859: {Name:mk496006d22a0565bb9e0d565e1b3cb0cf0971cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 14:58:00.016846 1223212 start.go:364] duration metric: took 27.263µs to acquireMachinesLock for "ha-290859"
	I0414 14:58:00.016866 1223212 start.go:96] Skipping create...Using existing machine configuration
	I0414 14:58:00.016874 1223212 fix.go:54] fixHost starting: 
	I0414 14:58:00.017155 1223212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:58:00.017213 1223212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:58:00.032664 1223212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35747
	I0414 14:58:00.033250 1223212 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:58:00.033755 1223212 main.go:141] libmachine: Using API Version  1
	I0414 14:58:00.033780 1223212 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:58:00.034149 1223212 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:58:00.034367 1223212 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:58:00.034554 1223212 main.go:141] libmachine: (ha-290859) Calling .GetState
	I0414 14:58:00.036208 1223212 fix.go:112] recreateIfNeeded on ha-290859: state=Stopped err=<nil>
	I0414 14:58:00.036248 1223212 main.go:141] libmachine: (ha-290859) Calling .DriverName
	W0414 14:58:00.036397 1223212 fix.go:138] unexpected machine state, will restart: <nil>
	I0414 14:58:00.038188 1223212 out.go:177] * Restarting existing kvm2 VM for "ha-290859" ...
	I0414 14:58:00.039436 1223212 main.go:141] libmachine: (ha-290859) Calling .Start
	I0414 14:58:00.039637 1223212 main.go:141] libmachine: (ha-290859) starting domain...
	I0414 14:58:00.039661 1223212 main.go:141] libmachine: (ha-290859) ensuring networks are active...
	I0414 14:58:00.040481 1223212 main.go:141] libmachine: (ha-290859) Ensuring network default is active
	I0414 14:58:00.040723 1223212 main.go:141] libmachine: (ha-290859) Ensuring network mk-ha-290859 is active
	I0414 14:58:00.041001 1223212 main.go:141] libmachine: (ha-290859) getting domain XML...
	I0414 14:58:00.041662 1223212 main.go:141] libmachine: (ha-290859) creating domain...
	I0414 14:58:01.250464 1223212 main.go:141] libmachine: (ha-290859) waiting for IP...
	I0414 14:58:01.251552 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:01.251894 1223212 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:58:01.252028 1223212 main.go:141] libmachine: (ha-290859) DBG | I0414 14:58:01.251923 1223247 retry.go:31] will retry after 221.862556ms: waiting for domain to come up
	I0414 14:58:01.475615 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:01.476157 1223212 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:58:01.476275 1223212 main.go:141] libmachine: (ha-290859) DBG | I0414 14:58:01.476211 1223247 retry.go:31] will retry after 281.470223ms: waiting for domain to come up
	I0414 14:58:01.759949 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:01.760507 1223212 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:58:01.760530 1223212 main.go:141] libmachine: (ha-290859) DBG | I0414 14:58:01.760471 1223247 retry.go:31] will retry after 452.37336ms: waiting for domain to come up
	I0414 14:58:02.214146 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:02.214560 1223212 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:58:02.214588 1223212 main.go:141] libmachine: (ha-290859) DBG | I0414 14:58:02.214522 1223247 retry.go:31] will retry after 404.819056ms: waiting for domain to come up
	I0414 14:58:02.621118 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:02.621581 1223212 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:58:02.621614 1223212 main.go:141] libmachine: (ha-290859) DBG | I0414 14:58:02.621531 1223247 retry.go:31] will retry after 614.590589ms: waiting for domain to come up
	I0414 14:58:03.237459 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:03.237956 1223212 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:58:03.238007 1223212 main.go:141] libmachine: (ha-290859) DBG | I0414 14:58:03.237910 1223247 retry.go:31] will retry after 643.121119ms: waiting for domain to come up
	I0414 14:58:03.882822 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:03.883240 1223212 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:58:03.883285 1223212 main.go:141] libmachine: (ha-290859) DBG | I0414 14:58:03.883214 1223247 retry.go:31] will retry after 1.002645406s: waiting for domain to come up
	I0414 14:58:04.887497 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:04.888012 1223212 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:58:04.888047 1223212 main.go:141] libmachine: (ha-290859) DBG | I0414 14:58:04.887973 1223247 retry.go:31] will retry after 1.241670442s: waiting for domain to come up
	I0414 14:58:06.131338 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:06.131753 1223212 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:58:06.131782 1223212 main.go:141] libmachine: (ha-290859) DBG | I0414 14:58:06.131710 1223247 retry.go:31] will retry after 1.35348732s: waiting for domain to come up
	I0414 14:58:07.487277 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:07.487771 1223212 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:58:07.487818 1223212 main.go:141] libmachine: (ha-290859) DBG | I0414 14:58:07.487730 1223247 retry.go:31] will retry after 1.453121759s: waiting for domain to come up
	I0414 14:58:08.943543 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:08.944076 1223212 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:58:08.944139 1223212 main.go:141] libmachine: (ha-290859) DBG | I0414 14:58:08.944037 1223247 retry.go:31] will retry after 2.633823626s: waiting for domain to come up
	I0414 14:58:11.579709 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:11.580096 1223212 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:58:11.580130 1223212 main.go:141] libmachine: (ha-290859) DBG | I0414 14:58:11.580056 1223247 retry.go:31] will retry after 2.536944167s: waiting for domain to come up
	I0414 14:58:14.119779 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:14.120215 1223212 main.go:141] libmachine: (ha-290859) DBG | unable to find current IP address of domain ha-290859 in network mk-ha-290859
	I0414 14:58:14.120294 1223212 main.go:141] libmachine: (ha-290859) DBG | I0414 14:58:14.120145 1223247 retry.go:31] will retry after 3.647827366s: waiting for domain to come up
	I0414 14:58:17.771694 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:17.772226 1223212 main.go:141] libmachine: (ha-290859) found domain IP: 192.168.39.110
	I0414 14:58:17.772251 1223212 main.go:141] libmachine: (ha-290859) reserving static IP address...
	I0414 14:58:17.772278 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has current primary IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:17.772678 1223212 main.go:141] libmachine: (ha-290859) reserved static IP address 192.168.39.110 for domain ha-290859
	I0414 14:58:17.772734 1223212 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "ha-290859", mac: "52:54:00:be:9f:8b", ip: "192.168.39.110"} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:10 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:58:17.772741 1223212 main.go:141] libmachine: (ha-290859) waiting for SSH...
	I0414 14:58:17.772763 1223212 main.go:141] libmachine: (ha-290859) DBG | skip adding static IP to network mk-ha-290859 - found existing host DHCP lease matching {name: "ha-290859", mac: "52:54:00:be:9f:8b", ip: "192.168.39.110"}
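
The waiting-for-IP loop above polls the mk-ha-290859 network's DHCP leases until one matches the domain's MAC (52:54:00:be:9f:8b), then reuses that lease (192.168.39.110) as the static reservation. A data-only sketch of the matching step, with a hypothetical lease struct standing in for the libvirt bindings:

package main

import "fmt"

type lease struct{ Mac, IP, Hostname string }

// findIP returns the leased IP for the given MAC, if any lease matches.
func findIP(leases []lease, mac string) (string, bool) {
	for _, l := range leases {
		if l.Mac == mac {
			return l.IP, true
		}
	}
	return "", false
}

func main() {
	leases := []lease{{Mac: "52:54:00:be:9f:8b", IP: "192.168.39.110", Hostname: "ha-290859"}}
	if ip, ok := findIP(leases, "52:54:00:be:9f:8b"); ok {
		fmt.Println("found domain IP:", ip)
	}
}
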
	I0414 14:58:17.772772 1223212 main.go:141] libmachine: (ha-290859) DBG | Getting to WaitForSSH function...
	I0414 14:58:17.774969 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:17.775382 1223212 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:10 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:58:17.775403 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:17.775553 1223212 main.go:141] libmachine: (ha-290859) DBG | Using SSH client type: external
	I0414 14:58:17.775602 1223212 main.go:141] libmachine: (ha-290859) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa (-rw-------)
	I0414 14:58:17.775636 1223212 main.go:141] libmachine: (ha-290859) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.110 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:58:17.775652 1223212 main.go:141] libmachine: (ha-290859) DBG | About to run SSH command:
	I0414 14:58:17.775669 1223212 main.go:141] libmachine: (ha-290859) DBG | exit 0
	I0414 14:58:17.903246 1223212 main.go:141] libmachine: (ha-290859) DBG | SSH cmd err, output: <nil>: 
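
WaitForSSH above shells out to the system ssh client with host-key checking disabled and the machine's id_rsa, running `exit 0` until the guest accepts the connection. A sketch of assembling that invocation (the option list is trimmed from the one in the log; this is an illustration, not minikube's sshutil):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3", "-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", "/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa",
		"-p", "22", "docker@192.168.39.110", "exit 0",
	}
	if err := exec.Command("ssh", args...).Run(); err != nil {
		fmt.Println("ssh not ready yet:", err) // caller would retry
		return
	}
	fmt.Println("ssh is up")
}
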
	I0414 14:58:17.903656 1223212 main.go:141] libmachine: (ha-290859) Calling .GetConfigRaw
	I0414 14:58:17.904372 1223212 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:58:17.907063 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:17.907435 1223212 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:10 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:58:17.907457 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:17.907692 1223212 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:58:17.907888 1223212 machine.go:93] provisionDockerMachine start ...
	I0414 14:58:17.907919 1223212 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:58:17.908126 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:58:17.910271 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:17.910626 1223212 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:10 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:58:17.910671 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:17.910737 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:58:17.910909 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:58:17.911086 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:58:17.911243 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:58:17.911462 1223212 main.go:141] libmachine: Using SSH client type: native
	I0414 14:58:17.911706 1223212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:58:17.911722 1223212 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 14:58:18.023283 1223212 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0414 14:58:18.023320 1223212 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:58:18.023571 1223212 buildroot.go:166] provisioning hostname "ha-290859"
	I0414 14:58:18.023599 1223212 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:58:18.023805 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:58:18.026202 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:18.026519 1223212 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:10 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:58:18.026551 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:18.026756 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:58:18.026939 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:58:18.027108 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:58:18.027324 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:58:18.027544 1223212 main.go:141] libmachine: Using SSH client type: native
	I0414 14:58:18.027750 1223212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:58:18.027761 1223212 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-290859 && echo "ha-290859" | sudo tee /etc/hostname
	I0414 14:58:18.152016 1223212 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-290859
	
	I0414 14:58:18.152050 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:58:18.154987 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:18.155441 1223212 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:10 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:58:18.155474 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:18.155648 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:58:18.155851 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:58:18.156014 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:58:18.156225 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:58:18.156390 1223212 main.go:141] libmachine: Using SSH client type: native
	I0414 14:58:18.156615 1223212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:58:18.156636 1223212 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-290859' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-290859/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-290859' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:58:18.279233 1223212 main.go:141] libmachine: SSH cmd err, output: <nil>: 
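
provisionDockerMachine sets the hostname and pins 127.0.1.1 in /etc/hosts via the shell commands seen above. A sketch of how such a snippet can be templated from the machine name; the fmt.Sprintf layout is an assumption for illustration, not minikube's source:

package main

import "fmt"

// hostnameCmd renders the hostname/etc-hosts provisioning snippet for one machine.
func hostnameCmd(name string) string {
	return fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name)
}

func main() { fmt.Println(hostnameCmd("ha-290859")) }
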
	I0414 14:58:18.279292 1223212 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1196368/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1196368/.minikube}
	I0414 14:58:18.279314 1223212 buildroot.go:174] setting up certificates
	I0414 14:58:18.279325 1223212 provision.go:84] configureAuth start
	I0414 14:58:18.279335 1223212 main.go:141] libmachine: (ha-290859) Calling .GetMachineName
	I0414 14:58:18.279683 1223212 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:58:18.282508 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:18.282868 1223212 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:10 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:58:18.282892 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:18.283056 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:58:18.285526 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:18.285895 1223212 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:10 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:58:18.285936 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:18.286056 1223212 provision.go:143] copyHostCerts
	I0414 14:58:18.286090 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:58:18.286129 1223212 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem, removing ...
	I0414 14:58:18.286160 1223212 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:58:18.286238 1223212 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem (1082 bytes)
	I0414 14:58:18.286356 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:58:18.286383 1223212 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem, removing ...
	I0414 14:58:18.286390 1223212 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:58:18.286436 1223212 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem (1123 bytes)
	I0414 14:58:18.286522 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:58:18.286544 1223212 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem, removing ...
	I0414 14:58:18.286550 1223212 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:58:18.286587 1223212 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem (1675 bytes)
	I0414 14:58:18.286777 1223212 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem org=jenkins.ha-290859 san=[127.0.0.1 192.168.39.110 ha-290859 localhost minikube]
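
The server certificate above is generated inside minikube's Go code; purely as a sketch, an openssl-only equivalent of the same operation (hypothetical short file names, but the same org and SAN list from the log line) would look like:

	openssl req -new -key server-key.pem -subj "/O=jenkins.ha-290859" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.39.110,DNS:ha-290859,DNS:localhost,DNS:minikube") \
	  -out server.pem -days 365
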
	I0414 14:58:18.404382 1223212 provision.go:177] copyRemoteCerts
	I0414 14:58:18.404469 1223212 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
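
The tripled /etc/docker argument apparently comes from emitting one mkdir target per remote cert path (three certs are copied just below). Since mkdir -p is idempotent and tolerates duplicate arguments, the repetition is harmless:

	mkdir -p /tmp/demo /tmp/demo /tmp/demo && echo ok   # succeeds; directory created once
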
	I0414 14:58:18.404506 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:58:18.407083 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:18.407459 1223212 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:10 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:58:18.407486 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:18.407697 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:58:18.407905 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:58:18.408063 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:58:18.408215 1223212 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:58:18.492950 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0414 14:58:18.493034 1223212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 14:58:18.514977 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0414 14:58:18.515064 1223212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 14:58:18.535925 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0414 14:58:18.535992 1223212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0414 14:58:18.556259 1223212 provision.go:87] duration metric: took 276.918906ms to configureAuth
	I0414 14:58:18.556283 1223212 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:58:18.556535 1223212 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:58:18.556551 1223212 machine.go:96] duration metric: took 648.650265ms to provisionDockerMachine
	I0414 14:58:18.556561 1223212 start.go:293] postStartSetup for "ha-290859" (driver="kvm2")
	I0414 14:58:18.556576 1223212 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:58:18.556625 1223212 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:58:18.556990 1223212 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:58:18.557040 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:58:18.559442 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:18.559758 1223212 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:10 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:58:18.559790 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:18.559961 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:58:18.560142 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:58:18.560322 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:58:18.560472 1223212 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:58:18.645532 1223212 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:58:18.649197 1223212 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:58:18.649221 1223212 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/addons for local assets ...
	I0414 14:58:18.649296 1223212 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/files for local assets ...
	I0414 14:58:18.649398 1223212 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> 12036392.pem in /etc/ssl/certs
	I0414 14:58:18.649410 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /etc/ssl/certs/12036392.pem
	I0414 14:58:18.649512 1223212 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:58:18.658292 1223212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:58:18.679212 1223212 start.go:296] duration metric: took 122.634461ms for postStartSetup
	I0414 14:58:18.679269 1223212 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:58:18.679626 1223212 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0414 14:58:18.679661 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:58:18.682445 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:18.682838 1223212 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:10 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:58:18.682859 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:18.683027 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:58:18.683228 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:58:18.683420 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:58:18.683626 1223212 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:58:18.773347 1223212 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0414 14:58:18.773415 1223212 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0414 14:58:18.812041 1223212 fix.go:56] duration metric: took 18.795158701s for fixHost
	I0414 14:58:18.812091 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:58:18.815201 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:18.815656 1223212 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:10 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:58:18.815683 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:18.815945 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:58:18.816172 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:58:18.816350 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:58:18.816491 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:58:18.816651 1223212 main.go:141] libmachine: Using SSH client type: native
	I0414 14:58:18.816863 1223212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0414 14:58:18.816872 1223212 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:58:18.931588 1223212 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744642698.905353914
	
	I0414 14:58:18.931629 1223212 fix.go:216] guest clock: 1744642698.905353914
	I0414 14:58:18.931638 1223212 fix.go:229] Guest: 2025-04-14 14:58:18.905353914 +0000 UTC Remote: 2025-04-14 14:58:18.812072502 +0000 UTC m=+18.943703181 (delta=93.281412ms)
	I0414 14:58:18.931658 1223212 fix.go:200] guest clock delta is within tolerance: 93.281412ms
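
The delta is simply the guest timestamp minus the host-side timestamp, both captured around the date +%s.%N probe above:

	echo "1744642698.905353914 - 1744642698.812072502" | bc
	# .093281412  -> the 93.281412ms reported above, within tolerance
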
	I0414 14:58:18.931663 1223212 start.go:83] releasing machines lock for "ha-290859", held for 18.91480633s
	I0414 14:58:18.931683 1223212 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:58:18.931990 1223212 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:58:18.934457 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:18.934814 1223212 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:10 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:58:18.934836 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:18.934999 1223212 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:58:18.935428 1223212 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:58:18.935599 1223212 main.go:141] libmachine: (ha-290859) Calling .DriverName
	I0414 14:58:18.935705 1223212 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:58:18.935762 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:58:18.935859 1223212 ssh_runner.go:195] Run: cat /version.json
	I0414 14:58:18.935876 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHHostname
	I0414 14:58:18.938492 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:18.938853 1223212 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:10 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:58:18.938888 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:18.938907 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:18.939020 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:58:18.939200 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:58:18.939356 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:58:18.939491 1223212 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:10 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:58:18.939514 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:18.939594 1223212 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:58:18.939704 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHPort
	I0414 14:58:18.939854 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHKeyPath
	I0414 14:58:18.940017 1223212 main.go:141] libmachine: (ha-290859) Calling .GetSSHUsername
	I0414 14:58:18.940204 1223212 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859/id_rsa Username:docker}
	I0414 14:58:19.049558 1223212 ssh_runner.go:195] Run: systemctl --version
	I0414 14:58:19.055598 1223212 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 14:58:19.060748 1223212 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:58:19.060807 1223212 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:58:19.075760 1223212 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 14:58:19.075796 1223212 start.go:495] detecting cgroup driver to use...
	I0414 14:58:19.075857 1223212 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0414 14:58:19.105781 1223212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0414 14:58:19.118798 1223212 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:58:19.118856 1223212 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:58:19.131627 1223212 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:58:19.144090 1223212 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:58:19.258491 1223212 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:58:19.395084 1223212 docker.go:233] disabling docker service ...
	I0414 14:58:19.395173 1223212 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:58:19.408548 1223212 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:58:19.421029 1223212 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:58:19.555293 1223212 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:58:19.666947 1223212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 14:58:19.679686 1223212 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:58:19.695999 1223212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0414 14:58:19.705372 1223212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0414 14:58:19.714794 1223212 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0414 14:58:19.714866 1223212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0414 14:58:19.724385 1223212 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:58:19.733717 1223212 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0414 14:58:19.742978 1223212 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0414 14:58:19.752264 1223212 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:58:19.762082 1223212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0414 14:58:19.771857 1223212 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0414 14:58:19.781591 1223212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
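
Taken together, the sed edits above leave the CRI section of /etc/containerd/config.toml roughly in this shape (a sketch assuming a stock containerd 1.7 config; the exact layout varies by version):

	[plugins."io.containerd.grpc.v1.cri"]
	  enable_unprivileged_ports = true
	  sandbox_image = "registry.k8s.io/pause:3.10"
	  restrict_oom_score_adj = false
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	    runtime_type = "io.containerd.runc.v2"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false
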
	I0414 14:58:19.791123 1223212 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:58:19.799649 1223212 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:58:19.799703 1223212 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:58:19.811997 1223212 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
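
Both kernel prerequisites for bridged pod networking get established here: the failed sysctl triggers a module load, then IP forwarding is switched on. As plain shell:

	sudo modprobe br_netfilter                        # creates /proc/sys/net/bridge/*
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward   # pod traffic must be routable
	sysctl net.bridge.bridge-nf-call-iptables         # now resolves instead of ENOENT
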
	I0414 14:58:19.820445 1223212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:58:19.932516 1223212 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0414 14:58:19.960151 1223212 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0414 14:58:19.960240 1223212 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:58:19.964906 1223212 retry.go:31] will retry after 786.537519ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
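
The retry is a plain poll for the socket to reappear after the containerd restart; a shell sketch of the 60s wait would be:

	for _ in $(seq 1 60); do
	  [ -S /run/containerd/containerd.sock ] && break   # -S: path exists and is a socket
	  sleep 1
	done
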
	I0414 14:58:20.752230 1223212 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0414 14:58:20.757280 1223212 start.go:563] Will wait 60s for crictl version
	I0414 14:58:20.757337 1223212 ssh_runner.go:195] Run: which crictl
	I0414 14:58:20.760924 1223212 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:58:20.797226 1223212 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0414 14:58:20.797293 1223212 ssh_runner.go:195] Run: containerd --version
	I0414 14:58:20.822463 1223212 ssh_runner.go:195] Run: containerd --version
	I0414 14:58:20.844657 1223212 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.23 ...
	I0414 14:58:20.845956 1223212 main.go:141] libmachine: (ha-290859) Calling .GetIP
	I0414 14:58:20.848590 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:20.848907 1223212 main.go:141] libmachine: (ha-290859) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:9f:8b", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:10 +0000 UTC Type:0 Mac:52:54:00:be:9f:8b Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-290859 Clientid:01:52:54:00:be:9f:8b}
	I0414 14:58:20.848937 1223212 main.go:141] libmachine: (ha-290859) DBG | domain ha-290859 has defined IP address 192.168.39.110 and MAC address 52:54:00:be:9f:8b in network mk-ha-290859
	I0414 14:58:20.849127 1223212 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 14:58:20.852831 1223212 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
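
The temp-file-then-cp dance in the command above sidesteps the classic sudo redirection pitfall: the shell opens /etc/hosts before sudo elevates, so a direct append fails. In outline:

	# sudo echo "192.168.39.1 host.minikube.internal" >> /etc/hosts   # permission denied:
	#                                                                 # '>>' runs unprivileged
	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  echo $'192.168.39.1\thost.minikube.internal'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts
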
	I0414 14:58:20.865011 1223212 kubeadm.go:883] updating cluster {Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 14:58:20.865148 1223212 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:58:20.865196 1223212 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:58:20.896286 1223212 containerd.go:627] all images are preloaded for containerd runtime.
	I0414 14:58:20.896310 1223212 containerd.go:534] Images already preloaded, skipping extraction
	I0414 14:58:20.896363 1223212 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:58:20.926523 1223212 containerd.go:627] all images are preloaded for containerd runtime.
	I0414 14:58:20.926548 1223212 cache_images.go:84] Images are preloaded, skipping loading
	I0414 14:58:20.926563 1223212 kubeadm.go:934] updating node { 192.168.39.110 8443 v1.32.2 containerd true true} ...
	I0414 14:58:20.926675 1223212 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-290859 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.110
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
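
The empty ExecStart= line in the unit fragment above is deliberate: in a systemd drop-in, a list-valued setting such as ExecStart must first be cleared before the override sets the new command. Once the drop-in lands on disk (it is copied below), the effective command can be verified with:

	sudo systemctl daemon-reload
	systemctl cat kubelet | grep -A1 '^ExecStart='   # blank reset, then the override
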
	I0414 14:58:20.926741 1223212 ssh_runner.go:195] Run: sudo crictl info
	I0414 14:58:20.957700 1223212 cni.go:84] Creating CNI manager for ""
	I0414 14:58:20.957723 1223212 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0414 14:58:20.957737 1223212 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 14:58:20.957757 1223212 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.110 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-290859 NodeName:ha-290859 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.110"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.110 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 14:58:20.957864 1223212 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.110
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-290859"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.110"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.110"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
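
The assembled multi-document config ends here; it is written to /var/tmp/minikube/kubeadm.yaml.new further down. Assuming a kubeadm new enough to carry the validate subcommand (v1.26+), it could be checked in place with:

	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
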
	
	I0414 14:58:20.957885 1223212 kube-vip.go:115] generating kube-vip config ...
	I0414 14:58:20.957935 1223212 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0414 14:58:20.980561 1223212 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0414 14:58:20.980679 1223212 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.10
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
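
Once this manifest is copied into /etc/kubernetes/manifests (done below), kubelet runs kube-vip as a static pod, and the VIP from the address env var should answer on the API port. A quick probe, as a sketch:

	curl -ks https://192.168.39.254:8443/healthz   # VIP held by the kube-vip leader
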
	I0414 14:58:20.980734 1223212 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 14:58:20.992163 1223212 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 14:58:20.992242 1223212 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0414 14:58:21.000726 1223212 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0414 14:58:21.016373 1223212 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 14:58:21.031630 1223212 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0414 14:58:21.046888 1223212 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1442 bytes)
	I0414 14:58:21.062383 1223212 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0414 14:58:21.065785 1223212 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:58:21.076490 1223212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:58:21.181513 1223212 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:58:21.198460 1223212 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859 for IP: 192.168.39.110
	I0414 14:58:21.198484 1223212 certs.go:194] generating shared ca certs ...
	I0414 14:58:21.198507 1223212 certs.go:226] acquiring lock for ca certs: {Name:mk7215406b4c41badf9eca6bf9f1036fd88f670e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:58:21.198675 1223212 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key
	I0414 14:58:21.198746 1223212 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key
	I0414 14:58:21.198770 1223212 certs.go:256] generating profile certs ...
	I0414 14:58:21.198895 1223212 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key
	I0414 14:58:21.198988 1223212 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key.c955092d
	I0414 14:58:21.199060 1223212 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key
	I0414 14:58:21.199084 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0414 14:58:21.199106 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0414 14:58:21.199124 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0414 14:58:21.199142 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0414 14:58:21.199160 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0414 14:58:21.199187 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0414 14:58:21.199220 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0414 14:58:21.199240 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0414 14:58:21.199340 1223212 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem (1338 bytes)
	W0414 14:58:21.199389 1223212 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639_empty.pem, impossibly tiny 0 bytes
	I0414 14:58:21.199405 1223212 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 14:58:21.199443 1223212 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem (1082 bytes)
	I0414 14:58:21.199480 1223212 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:58:21.199516 1223212 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem (1675 bytes)
	I0414 14:58:21.199569 1223212 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:58:21.199619 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem -> /usr/share/ca-certificates/1203639.pem
	I0414 14:58:21.199644 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /usr/share/ca-certificates/12036392.pem
	I0414 14:58:21.199662 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:58:21.200312 1223212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:58:21.245036 1223212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:58:21.270226 1223212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:58:21.301299 1223212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:58:21.329835 1223212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0414 14:58:21.357424 1223212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 14:58:21.381219 1223212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:58:21.405398 1223212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 14:58:21.441670 1223212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/1203639.pem --> /usr/share/ca-certificates/1203639.pem (1338 bytes)
	I0414 14:58:21.480779 1223212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /usr/share/ca-certificates/12036392.pem (1708 bytes)
	I0414 14:58:21.531604 1223212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:58:21.571382 1223212 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 14:58:21.594634 1223212 ssh_runner.go:195] Run: openssl version
	I0414 14:58:21.600389 1223212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1203639.pem && ln -fs /usr/share/ca-certificates/1203639.pem /etc/ssl/certs/1203639.pem"
	I0414 14:58:21.611391 1223212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1203639.pem
	I0414 14:58:21.615987 1223212 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1203639.pem
	I0414 14:58:21.616052 1223212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1203639.pem
	I0414 14:58:21.622246 1223212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1203639.pem /etc/ssl/certs/51391683.0"
	I0414 14:58:21.642602 1223212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12036392.pem && ln -fs /usr/share/ca-certificates/12036392.pem /etc/ssl/certs/12036392.pem"
	I0414 14:58:21.660916 1223212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12036392.pem
	I0414 14:58:21.665366 1223212 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/12036392.pem
	I0414 14:58:21.665425 1223212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12036392.pem
	I0414 14:58:21.672239 1223212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12036392.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 14:58:21.687862 1223212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:58:21.701990 1223212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:58:21.707086 1223212 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:58:21.707157 1223212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:58:21.713255 1223212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
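
The hash-then-symlink pairs above reimplement what c_rehash does: OpenSSL locates trust anchors by subject-hash file name, so each CA cert gets a <hash>.0 link. For the minikube CA:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	echo "$h"        # b5213941, matching the link created above
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
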
	I0414 14:58:21.729614 1223212 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:58:21.736937 1223212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0414 14:58:21.745014 1223212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0414 14:58:21.751550 1223212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0414 14:58:21.758220 1223212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0414 14:58:21.766030 1223212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0414 14:58:21.771700 1223212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
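
Each -checkend probe above exits non-zero if the certificate expires within the given window (86400s = 24h), presumably so the restart path can decide whether certs need regenerating. Standalone:

	openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt \
	  && echo "valid for at least 24h" || echo "expires within 24h"
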
	I0414 14:58:21.776978 1223212 kubeadm.go:392] StartCluster: {Name:ha-290859 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-290859 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:58:21.777094 1223212 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0414 14:58:21.777176 1223212 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 14:58:21.809866 1223212 cri.go:89] found id: "ea9e85492cab11d04c4610b349d14e65f48b4f7ef9b1bf510cce3f98d9f23a26"
	I0414 14:58:21.809897 1223212 cri.go:89] found id: "d9bf8cef6e9551ba044bfa75d53bebdabf94a544fb35bcba8ae9dda955c97297"
	I0414 14:58:21.809902 1223212 cri.go:89] found id: "c3c2f4d5fe419392ff3850394da92847c7bcfe369f4d0eddffd38c2a59b41025"
	I0414 14:58:21.809906 1223212 cri.go:89] found id: "607041fc2f4edc17de3caec2d00a9f9b49a94ed154254da72ec094a0f148db36"
	I0414 14:58:21.809910 1223212 cri.go:89] found id: "1c01d86a74294bbfd5f487ec85ffc0f35cc4b979ad90c940eea5a17a8e5f46fb"
	I0414 14:58:21.809921 1223212 cri.go:89] found id: "e8658abcccb8b10d531ad775050d96f3375e484efcbaba4d5509a7a22f3608a9"
	I0414 14:58:21.809926 1223212 cri.go:89] found id: "29445064369e58250458efcfeed9a28e6da75ce4bcb6f15c9e58844eb1ba811e"
	I0414 14:58:21.809929 1223212 cri.go:89] found id: "6bb8bbfa1b317897b9bcc96ba49e7c68f83cc4409dd69a72b86f0448aa2519ea"
	I0414 14:58:21.809934 1223212 cri.go:89] found id: "00b109770be1cb3d772b7d440ccc36c098a8627e8280f195c263a0a87a6e0c07"
	I0414 14:58:21.809941 1223212 cri.go:89] found id: "6dc42b262abf6aa5624bcc0028b6e34ab24ddcaffd2215fc0ae3cc2554bd37e7"
	I0414 14:58:21.809946 1223212 cri.go:89] found id: ""
	I0414 14:58:21.809998 1223212 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0414 14:58:21.823102 1223212 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-04-14T14:58:21Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0414 14:58:21.823202 1223212 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 14:58:21.832381 1223212 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0414 14:58:21.832400 1223212 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0414 14:58:21.832444 1223212 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0414 14:58:21.841384 1223212 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0414 14:58:21.841800 1223212 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-290859" does not appear in /home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:58:21.841959 1223212 kubeconfig.go:62] /home/jenkins/minikube-integration/20512-1196368/kubeconfig needs updating (will repair): [kubeconfig missing "ha-290859" cluster setting kubeconfig missing "ha-290859" context setting]
	I0414 14:58:21.842303 1223212 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/kubeconfig: {Name:mkeb969af3beabfdafe344f27031959a97621135 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:58:21.842683 1223212 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:58:21.842848 1223212 kapi.go:59] client config for ha-290859: &rest.Config{Host:"https://192.168.39.110:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.crt", KeyFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/client.key", CAFile:"/home/jenkins/minikube-integration/20512-1196368/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0414 14:58:21.843405 1223212 cert_rotation.go:140] Starting client certificate rotation controller
	I0414 14:58:21.843359 1223212 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0414 14:58:21.843746 1223212 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0414 14:58:21.843757 1223212 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0414 14:58:21.843768 1223212 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0414 14:58:21.844495 1223212 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0414 14:58:21.853774 1223212 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.110
	I0414 14:58:21.853792 1223212 kubeadm.go:597] duration metric: took 21.386394ms to restartPrimaryControlPlane
	I0414 14:58:21.853799 1223212 kubeadm.go:394] duration metric: took 76.837265ms to StartCluster
	I0414 14:58:21.853812 1223212 settings.go:142] acquiring lock: {Name:mk41907a6d0da0bb56b7cd58b5d8065ec36ecc97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:58:21.853868 1223212 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:58:21.854394 1223212 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1196368/kubeconfig: {Name:mkeb969af3beabfdafe344f27031959a97621135 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:58:21.854586 1223212 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0414 14:58:21.854607 1223212 start.go:241] waiting for startup goroutines ...
	I0414 14:58:21.854630 1223212 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 14:58:21.854791 1223212 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:58:21.857034 1223212 out.go:177] * Enabled addons: 
	I0414 14:58:21.858001 1223212 addons.go:514] duration metric: took 3.389634ms for enable addons: enabled=[]
	I0414 14:58:21.858033 1223212 start.go:246] waiting for cluster config update ...
	I0414 14:58:21.858046 1223212 start.go:255] writing updated cluster config ...
	I0414 14:58:21.859392 1223212 out.go:201] 
	I0414 14:58:21.860612 1223212 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:58:21.860707 1223212 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:58:21.862060 1223212 out.go:177] * Starting "ha-290859-m02" control-plane node in "ha-290859" cluster
	I0414 14:58:21.863006 1223212 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:58:21.863027 1223212 cache.go:56] Caching tarball of preloaded images
	I0414 14:58:21.863123 1223212 preload.go:172] Found /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0414 14:58:21.863134 1223212 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0414 14:58:21.863217 1223212 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:58:21.863406 1223212 start.go:360] acquireMachinesLock for ha-290859-m02: {Name:mk496006d22a0565bb9e0d565e1b3cb0cf0971cd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 14:58:21.863451 1223212 start.go:364] duration metric: took 26.83µs to acquireMachinesLock for "ha-290859-m02"
	I0414 14:58:21.863466 1223212 start.go:96] Skipping create...Using existing machine configuration
	I0414 14:58:21.863473 1223212 fix.go:54] fixHost starting: m02
	I0414 14:58:21.863734 1223212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:58:21.863768 1223212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:58:21.878965 1223212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44555
	I0414 14:58:21.879467 1223212 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:58:21.879953 1223212 main.go:141] libmachine: Using API Version  1
	I0414 14:58:21.879973 1223212 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:58:21.880327 1223212 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:58:21.880531 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:58:21.880714 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetState
	I0414 14:58:21.882037 1223212 fix.go:112] recreateIfNeeded on ha-290859-m02: state=Stopped err=<nil>
	I0414 14:58:21.882056 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	W0414 14:58:21.882236 1223212 fix.go:138] unexpected machine state, will restart: <nil>
	I0414 14:58:21.884404 1223212 out.go:177] * Restarting existing kvm2 VM for "ha-290859-m02" ...
	I0414 14:58:21.885489 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .Start
	I0414 14:58:21.885650 1223212 main.go:141] libmachine: (ha-290859-m02) starting domain...
	I0414 14:58:21.885665 1223212 main.go:141] libmachine: (ha-290859-m02) ensuring networks are active...
	I0414 14:58:21.886323 1223212 main.go:141] libmachine: (ha-290859-m02) Ensuring network default is active
	I0414 14:58:21.886645 1223212 main.go:141] libmachine: (ha-290859-m02) Ensuring network mk-ha-290859 is active
	I0414 14:58:21.886949 1223212 main.go:141] libmachine: (ha-290859-m02) getting domain XML...
	I0414 14:58:21.887589 1223212 main.go:141] libmachine: (ha-290859-m02) creating domain...
	I0414 14:58:23.097121 1223212 main.go:141] libmachine: (ha-290859-m02) waiting for IP...
	I0414 14:58:23.098035 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:23.098452 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:58:23.098564 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:58:23.098455 1223387 retry.go:31] will retry after 233.237841ms: waiting for domain to come up
	I0414 14:58:23.332867 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:23.333383 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:58:23.333400 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:58:23.333351 1223387 retry.go:31] will retry after 381.899222ms: waiting for domain to come up
	I0414 14:58:23.716962 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:23.717333 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:58:23.717359 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:58:23.717318 1223387 retry.go:31] will retry after 412.191626ms: waiting for domain to come up
	I0414 14:58:24.130877 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:24.131406 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:58:24.131437 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:58:24.131372 1223387 retry.go:31] will retry after 414.091417ms: waiting for domain to come up
	I0414 14:58:24.547112 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:24.547626 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:58:24.547654 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:58:24.547593 1223387 retry.go:31] will retry after 644.002595ms: waiting for domain to come up
	I0414 14:58:25.193608 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:25.194062 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:58:25.194112 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:58:25.194018 1223387 retry.go:31] will retry after 830.541478ms: waiting for domain to come up
	I0414 14:58:26.026072 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:26.026545 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:58:26.026574 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:58:26.026503 1223387 retry.go:31] will retry after 1.141092073s: waiting for domain to come up
	I0414 14:58:27.169323 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:27.169749 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:58:27.169775 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:58:27.169715 1223387 retry.go:31] will retry after 1.081212512s: waiting for domain to come up
	I0414 14:58:28.252530 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:28.252969 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:58:28.253063 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:58:28.252956 1223387 retry.go:31] will retry after 1.510553531s: waiting for domain to come up
	I0414 14:58:29.764716 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:29.765248 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:58:29.765280 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:58:29.765202 1223387 retry.go:31] will retry after 1.415152488s: waiting for domain to come up
	I0414 14:58:31.182558 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:31.183006 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:58:31.183044 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:58:31.182956 1223387 retry.go:31] will retry after 2.534892478s: waiting for domain to come up
	I0414 14:58:33.720236 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:33.720529 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:58:33.720554 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:58:33.720505 1223387 retry.go:31] will retry after 3.490878268s: waiting for domain to come up
	I0414 14:58:37.213273 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:37.213780 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | unable to find current IP address of domain ha-290859-m02 in network mk-ha-290859
	I0414 14:58:37.213808 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | I0414 14:58:37.213759 1223387 retry.go:31] will retry after 4.456200887s: waiting for domain to come up
	I0414 14:58:41.675426 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:41.675912 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has current primary IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:41.675933 1223212 main.go:141] libmachine: (ha-290859-m02) found domain IP: 192.168.39.111
	I0414 14:58:41.675974 1223212 main.go:141] libmachine: (ha-290859-m02) reserving static IP address...
	I0414 14:58:41.676508 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "ha-290859-m02", mac: "52:54:00:f0:fd:94", ip: "192.168.39.111"} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:32 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:58:41.676570 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | skip adding static IP to network mk-ha-290859 - found existing host DHCP lease matching {name: "ha-290859-m02", mac: "52:54:00:f0:fd:94", ip: "192.168.39.111"}
	I0414 14:58:41.676591 1223212 main.go:141] libmachine: (ha-290859-m02) reserved static IP address 192.168.39.111 for domain ha-290859-m02
	I0414 14:58:41.676605 1223212 main.go:141] libmachine: (ha-290859-m02) waiting for SSH...
	I0414 14:58:41.676613 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | Getting to WaitForSSH function...
	I0414 14:58:41.678641 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:41.678981 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:32 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:58:41.679009 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:41.679153 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | Using SSH client type: external
	I0414 14:58:41.679193 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa (-rw-------)
	I0414 14:58:41.679227 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:58:41.679240 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | About to run SSH command:
	I0414 14:58:41.679273 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | exit 0
	I0414 14:58:41.799010 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | SSH cmd err, output: <nil>: 
	I0414 14:58:41.799442 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetConfigRaw
	I0414 14:58:41.800097 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:58:41.802533 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:41.802965 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:32 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:58:41.803004 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:41.803343 1223212 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/ha-290859/config.json ...
	I0414 14:58:41.803578 1223212 machine.go:93] provisionDockerMachine start ...
	I0414 14:58:41.803601 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:58:41.803838 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:58:41.806135 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:41.806485 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:32 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:58:41.806515 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:41.806666 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:58:41.806832 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:58:41.806988 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:58:41.807098 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:58:41.807231 1223212 main.go:141] libmachine: Using SSH client type: native
	I0414 14:58:41.807445 1223212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:58:41.807455 1223212 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 14:58:41.903915 1223212 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0414 14:58:41.903952 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:58:41.904210 1223212 buildroot.go:166] provisioning hostname "ha-290859-m02"
	I0414 14:58:41.904246 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:58:41.904515 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:58:41.907832 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:41.908302 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:32 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:58:41.908340 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:41.908525 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:58:41.908726 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:58:41.908870 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:58:41.908993 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:58:41.909172 1223212 main.go:141] libmachine: Using SSH client type: native
	I0414 14:58:41.909456 1223212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:58:41.909476 1223212 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-290859-m02 && echo "ha-290859-m02" | sudo tee /etc/hostname
	I0414 14:58:42.022769 1223212 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-290859-m02
	
	I0414 14:58:42.022799 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:58:42.025802 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:42.026202 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:32 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:58:42.026236 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:42.026466 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:58:42.026685 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:58:42.026852 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:58:42.026987 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:58:42.027137 1223212 main.go:141] libmachine: Using SSH client type: native
	I0414 14:58:42.027415 1223212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:58:42.027436 1223212 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-290859-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-290859-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-290859-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:58:42.131347 1223212 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:58:42.131386 1223212 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1196368/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1196368/.minikube}
	I0414 14:58:42.131407 1223212 buildroot.go:174] setting up certificates
	I0414 14:58:42.131419 1223212 provision.go:84] configureAuth start
	I0414 14:58:42.131436 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetMachineName
	I0414 14:58:42.131786 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:58:42.134732 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:42.135112 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:32 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:58:42.135145 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:42.135324 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:58:42.137944 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:42.138395 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:32 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:58:42.138430 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:42.138552 1223212 provision.go:143] copyHostCerts
	I0414 14:58:42.138580 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:58:42.138615 1223212 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem, removing ...
	I0414 14:58:42.138623 1223212 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem
	I0414 14:58:42.138676 1223212 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/ca.pem (1082 bytes)
	I0414 14:58:42.138749 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:58:42.138766 1223212 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem, removing ...
	I0414 14:58:42.138772 1223212 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem
	I0414 14:58:42.138790 1223212 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/cert.pem (1123 bytes)
	I0414 14:58:42.138830 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:58:42.138846 1223212 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem, removing ...
	I0414 14:58:42.138852 1223212 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem
	I0414 14:58:42.138869 1223212 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1196368/.minikube/key.pem (1675 bytes)
	I0414 14:58:42.138915 1223212 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca-key.pem org=jenkins.ha-290859-m02 san=[127.0.0.1 192.168.39.111 ha-290859-m02 localhost minikube]
	I0414 14:58:42.180132 1223212 provision.go:177] copyRemoteCerts
	I0414 14:58:42.180196 1223212 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:58:42.180229 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:58:42.183220 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:42.183709 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:32 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:58:42.183744 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:42.183976 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:58:42.184199 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:58:42.184398 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:58:42.184547 1223212 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:58:42.261124 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0414 14:58:42.261193 1223212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 14:58:42.286821 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0414 14:58:42.286907 1223212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0414 14:58:42.312791 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0414 14:58:42.312861 1223212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 14:58:42.338189 1223212 provision.go:87] duration metric: took 206.752494ms to configureAuth
	I0414 14:58:42.338230 1223212 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:58:42.338511 1223212 config.go:182] Loaded profile config "ha-290859": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:58:42.338534 1223212 machine.go:96] duration metric: took 534.941447ms to provisionDockerMachine
	I0414 14:58:42.338548 1223212 start.go:293] postStartSetup for "ha-290859-m02" (driver="kvm2")
	I0414 14:58:42.338563 1223212 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:58:42.338604 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:58:42.338944 1223212 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:58:42.338979 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:58:42.341964 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:42.342334 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:32 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:58:42.342364 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:42.342515 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:58:42.342710 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:58:42.342874 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:58:42.343062 1223212 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:58:42.421434 1223212 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:58:42.425440 1223212 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:58:42.425468 1223212 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/addons for local assets ...
	I0414 14:58:42.425530 1223212 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1196368/.minikube/files for local assets ...
	I0414 14:58:42.425613 1223212 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> 12036392.pem in /etc/ssl/certs
	I0414 14:58:42.425628 1223212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem -> /etc/ssl/certs/12036392.pem
	I0414 14:58:42.425725 1223212 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 14:58:42.434329 1223212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/ssl/certs/12036392.pem --> /etc/ssl/certs/12036392.pem (1708 bytes)
	I0414 14:58:42.456138 1223212 start.go:296] duration metric: took 117.569647ms for postStartSetup
	I0414 14:58:42.456188 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:58:42.456524 1223212 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0414 14:58:42.456551 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:58:42.459140 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:42.459524 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:32 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:58:42.459555 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:42.459687 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:58:42.459867 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:58:42.460013 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:58:42.460149 1223212 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/ha-290859-m02/id_rsa Username:docker}
	I0414 14:58:42.540976 1223212 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0414 14:58:42.541063 1223212 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0414 14:58:42.596577 1223212 fix.go:56] duration metric: took 20.733082448s for fixHost
	I0414 14:58:42.596647 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHHostname
	I0414 14:58:42.599896 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:42.600323 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:32 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:58:42.600353 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:42.600556 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHPort
	I0414 14:58:42.600758 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:58:42.600895 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHKeyPath
	I0414 14:58:42.601026 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetSSHUsername
	I0414 14:58:42.601178 1223212 main.go:141] libmachine: Using SSH client type: native
	I0414 14:58:42.601396 1223212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0414 14:58:42.601406 1223212 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:58:42.695621 1223212 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744642722.664070450
	
	I0414 14:58:42.695654 1223212 fix.go:216] guest clock: 1744642722.664070450
	I0414 14:58:42.695665 1223212 fix.go:229] Guest: 2025-04-14 14:58:42.66407045 +0000 UTC Remote: 2025-04-14 14:58:42.596616108 +0000 UTC m=+42.728246790 (delta=67.454342ms)
	I0414 14:58:42.695688 1223212 fix.go:200] guest clock delta is within tolerance: 67.454342ms
	I0414 14:58:42.695694 1223212 start.go:83] releasing machines lock for "ha-290859-m02", held for 20.832233241s
	I0414 14:58:42.695719 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .DriverName
	I0414 14:58:42.696053 1223212 main.go:141] libmachine: (ha-290859-m02) Calling .GetIP
	I0414 14:58:42.698944 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:42.699376 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:94", ip: ""} in network mk-ha-290859: {Iface:virbr1 ExpiryTime:2025-04-14 15:58:32 +0000 UTC Type:0 Mac:52:54:00:f0:fd:94 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-290859-m02 Clientid:01:52:54:00:f0:fd:94}
	I0414 14:58:42.699403 1223212 main.go:141] libmachine: (ha-290859-m02) DBG | domain ha-290859-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:f0:fd:94 in network mk-ha-290859
	I0414 14:58:42.701043 1223212 out.go:177] * Found network options:
	I0414 14:58:42.702133 1223212 out.go:177]   - NO_PROXY=192.168.39.110
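	
	The repeated "will retry after …: waiting for domain to come up" lines above come from minikube's retry helper (retry.go), which polls libvirt for the domain's DHCP lease with a growing delay until the VM reports an IP. Below is a minimal Go sketch of that polling pattern; the lookupIP callback is hypothetical (standing in for the libvirt lease query) and the backoff constants are illustrative, not minikube's actual values.
	
	package main
	
	import (
		"errors"
		"fmt"
		"time"
	)
	
	// waitForIP polls lookupIP until it returns an address or the timeout expires.
	// The delay between attempts grows, roughly like the intervals in the log above.
	func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil && ip != "" {
				return ip, nil
			}
			time.Sleep(delay)
			if delay < 5*time.Second {
				delay = delay * 3 / 2 // grow the wait between attempts
			}
		}
		return "", errors.New("timed out waiting for domain IP")
	}
	
	func main() {
		// Stub lookup that "finds" an IP after a few seconds, for demonstration only.
		start := time.Now()
		ip, err := waitForIP(func() (string, error) {
			if time.Since(start) > 3*time.Second {
				return "192.168.39.111", nil
			}
			return "", errors.New("no lease yet")
		}, 30*time.Second)
		fmt.Println(ip, err)
	}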
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d21d3749d9531       c69fa2e9cbf5f       8 seconds ago       Running             coredns                   2                   112430fd476b5       coredns-668d6bf9bc-qnl6q
	a53ff9bb85d8f       6e38f40d628db       8 seconds ago       Exited              storage-provisioner       3                   dcc78f1964a9f       storage-provisioner
	cd2446ef37b3a       df3849d954c98       8 seconds ago       Running             kindnet-cni               2                   ecf0f98f2e3a4       kindnet-hm99t
	0f27e1896d7d8       c69fa2e9cbf5f       8 seconds ago       Running             coredns                   2                   62da6f800f914       coredns-668d6bf9bc-wbn4p
	158ea60bab73e       8c811b4aec35f       8 seconds ago       Running             busybox                   2                   2902de9175572       busybox-58667487b6-t6bgg
	98b44c2cfa7a4       f1332858868e1       8 seconds ago       Running             kube-proxy                2                   b72d650a18324       kube-proxy-cg945
	a214f7716389a       6ff023a402a69       17 seconds ago      Running             kube-vip                  1                   e45f783a2f148       kube-vip-ha-290859
	75d56975ac1d1       85b7a174738ba       17 seconds ago      Running             kube-apiserver            2                   7753a5d5b8917       kube-apiserver-ha-290859
	59850cb50c6ed       b6a454c5a800d       17 seconds ago      Running             kube-controller-manager   2                   703e4c4f55c11       kube-controller-manager-ha-290859
	d0953cb083c9a       d8e673e7c9983       17 seconds ago      Running             kube-scheduler            2                   7faf3816a92cd       kube-scheduler-ha-290859
	1b6ed0e1da787       a9e7e6b294baf       17 seconds ago      Running             etcd                      2                   5289919206332       etcd-ha-290859
	6def8b5e81c3c       8c811b4aec35f       6 minutes ago       Exited              busybox                   1                   8810167e1850b       busybox-58667487b6-t6bgg
	d9bf8cef6e955       c69fa2e9cbf5f       6 minutes ago       Exited              coredns                   1                   ae09d1f35f5bb       coredns-668d6bf9bc-wbn4p
	c3c2f4d5fe419       c69fa2e9cbf5f       6 minutes ago       Exited              coredns                   1                   8b812c2dfd4e4       coredns-668d6bf9bc-qnl6q
	607041fc2f4ed       df3849d954c98       6 minutes ago       Exited              kindnet-cni               1                   4c291c3e02236       kindnet-hm99t
	1c01d86a74294       f1332858868e1       6 minutes ago       Exited              kube-proxy                1                   756822c1e13ce       kube-proxy-cg945
	e8658abcccb8b       b6a454c5a800d       6 minutes ago       Exited              kube-controller-manager   1                   b171c03689d46       kube-controller-manager-ha-290859
	29445064369e5       d8e673e7c9983       6 minutes ago       Exited              kube-scheduler            1                   6e1304537402c       kube-scheduler-ha-290859
	6bb8bbfa1b317       a9e7e6b294baf       6 minutes ago       Exited              etcd                      1                   d32dfc76a4340       etcd-ha-290859
	00b109770be1c       85b7a174738ba       6 minutes ago       Exited              kube-apiserver            1                   eb5666eae29e1       kube-apiserver-ha-290859
	6dc42b262abf6       6ff023a402a69       6 minutes ago       Exited              kube-vip                  0                   c4bd0bf012eaf       kube-vip-ha-290859
	
	
	==> containerd <==
	Apr 14 14:58:36 ha-290859 containerd[810]: time="2025-04-14T14:58:36.700372592Z" level=info msg="CreateContainer within sandbox \"112430fd476b52a24bda22908b8c17f68481d8c8f886ed297ea07c74b6fefa55\" for container &ContainerMetadata{Name:coredns,Attempt:2,}"
	Apr 14 14:58:36 ha-290859 containerd[810]: time="2025-04-14T14:58:36.708908562Z" level=info msg="CreateContainer within sandbox \"dcc78f1964a9f148b95502dbe5d67c17a419d8e62770ad4aca45fd024776b486\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:3,}"
	Apr 14 14:58:36 ha-290859 containerd[810]: time="2025-04-14T14:58:36.758856364Z" level=info msg="CreateContainer within sandbox \"b72d650a18324bd3f71dd1d1e61c805b145df54c9a41b1322b1b003a8db5fc4a\" for &ContainerMetadata{Name:kube-proxy,Attempt:2,} returns container id \"98b44c2cfa7a4b06ab38737bc7dc36f53b22ded77e9c27f9809dbb9f9fe5c324\""
	Apr 14 14:58:36 ha-290859 containerd[810]: time="2025-04-14T14:58:36.761404146Z" level=info msg="StartContainer for \"98b44c2cfa7a4b06ab38737bc7dc36f53b22ded77e9c27f9809dbb9f9fe5c324\""
	Apr 14 14:58:36 ha-290859 containerd[810]: time="2025-04-14T14:58:36.772305684Z" level=info msg="CreateContainer within sandbox \"2902de9175572805cab0a3044b1fa287936ba29357df09927d07f19dd2c775ea\" for &ContainerMetadata{Name:busybox,Attempt:2,} returns container id \"158ea60bab73ec2e79049ac3e39a8ebb9428d321bacc3135af8dffdc637c8e5e\""
	Apr 14 14:58:36 ha-290859 containerd[810]: time="2025-04-14T14:58:36.774696337Z" level=info msg="StartContainer for \"158ea60bab73ec2e79049ac3e39a8ebb9428d321bacc3135af8dffdc637c8e5e\""
	Apr 14 14:58:36 ha-290859 containerd[810]: time="2025-04-14T14:58:36.775271688Z" level=info msg="CreateContainer within sandbox \"62da6f800f914c770408c7ebd52f1a8a54ce35b315d760611b89b44e21016bd2\" for &ContainerMetadata{Name:coredns,Attempt:2,} returns container id \"0f27e1896d7d8affdf5ea3f3d689621317ac5dd614efdec9760ed999a671d09b\""
	Apr 14 14:58:36 ha-290859 containerd[810]: time="2025-04-14T14:58:36.776037861Z" level=info msg="StartContainer for \"0f27e1896d7d8affdf5ea3f3d689621317ac5dd614efdec9760ed999a671d09b\""
	Apr 14 14:58:36 ha-290859 containerd[810]: time="2025-04-14T14:58:36.797325387Z" level=info msg="CreateContainer within sandbox \"ecf0f98f2e3a48290adcdf0a4f244a615ad3bda8d85a4c5ed6711bddc4b9216c\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"cd2446ef37b3a94be1a9c4475f0ac47563d2eed5fc9e82860e886726a07d3c9d\""
	Apr 14 14:58:36 ha-290859 containerd[810]: time="2025-04-14T14:58:36.798340672Z" level=info msg="CreateContainer within sandbox \"dcc78f1964a9f148b95502dbe5d67c17a419d8e62770ad4aca45fd024776b486\" for &ContainerMetadata{Name:storage-provisioner,Attempt:3,} returns container id \"a53ff9bb85d8f802b5183370fb36599bcf001a6d3d9d84fc3b698195021d436e\""
	Apr 14 14:58:36 ha-290859 containerd[810]: time="2025-04-14T14:58:36.800252392Z" level=info msg="StartContainer for \"cd2446ef37b3a94be1a9c4475f0ac47563d2eed5fc9e82860e886726a07d3c9d\""
	Apr 14 14:58:36 ha-290859 containerd[810]: time="2025-04-14T14:58:36.802441071Z" level=info msg="CreateContainer within sandbox \"112430fd476b52a24bda22908b8c17f68481d8c8f886ed297ea07c74b6fefa55\" for &ContainerMetadata{Name:coredns,Attempt:2,} returns container id \"d21d3749d95316043a2048ebc2e089fa0f11d073272994afdc259f70b2a8b4b2\""
	Apr 14 14:58:36 ha-290859 containerd[810]: time="2025-04-14T14:58:36.802969011Z" level=info msg="StartContainer for \"a53ff9bb85d8f802b5183370fb36599bcf001a6d3d9d84fc3b698195021d436e\""
	Apr 14 14:58:36 ha-290859 containerd[810]: time="2025-04-14T14:58:36.813094763Z" level=info msg="StartContainer for \"d21d3749d95316043a2048ebc2e089fa0f11d073272994afdc259f70b2a8b4b2\""
	Apr 14 14:58:36 ha-290859 containerd[810]: time="2025-04-14T14:58:36.962563738Z" level=info msg="StartContainer for \"0f27e1896d7d8affdf5ea3f3d689621317ac5dd614efdec9760ed999a671d09b\" returns successfully"
	Apr 14 14:58:37 ha-290859 containerd[810]: time="2025-04-14T14:58:37.010457018Z" level=info msg="StartContainer for \"158ea60bab73ec2e79049ac3e39a8ebb9428d321bacc3135af8dffdc637c8e5e\" returns successfully"
	Apr 14 14:58:37 ha-290859 containerd[810]: time="2025-04-14T14:58:37.039986138Z" level=info msg="StartContainer for \"d21d3749d95316043a2048ebc2e089fa0f11d073272994afdc259f70b2a8b4b2\" returns successfully"
	Apr 14 14:58:37 ha-290859 containerd[810]: time="2025-04-14T14:58:37.102101809Z" level=info msg="StartContainer for \"a53ff9bb85d8f802b5183370fb36599bcf001a6d3d9d84fc3b698195021d436e\" returns successfully"
	Apr 14 14:58:37 ha-290859 containerd[810]: time="2025-04-14T14:58:37.111739876Z" level=info msg="StartContainer for \"98b44c2cfa7a4b06ab38737bc7dc36f53b22ded77e9c27f9809dbb9f9fe5c324\" returns successfully"
	Apr 14 14:58:37 ha-290859 containerd[810]: time="2025-04-14T14:58:37.116545859Z" level=info msg="StartContainer for \"cd2446ef37b3a94be1a9c4475f0ac47563d2eed5fc9e82860e886726a07d3c9d\" returns successfully"
	Apr 14 14:58:37 ha-290859 containerd[810]: time="2025-04-14T14:58:37.222558601Z" level=info msg="shim disconnected" id=a53ff9bb85d8f802b5183370fb36599bcf001a6d3d9d84fc3b698195021d436e namespace=k8s.io
	Apr 14 14:58:37 ha-290859 containerd[810]: time="2025-04-14T14:58:37.222809649Z" level=warning msg="cleaning up after shim disconnected" id=a53ff9bb85d8f802b5183370fb36599bcf001a6d3d9d84fc3b698195021d436e namespace=k8s.io
	Apr 14 14:58:37 ha-290859 containerd[810]: time="2025-04-14T14:58:37.222882641Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Apr 14 14:58:37 ha-290859 containerd[810]: time="2025-04-14T14:58:37.717740706Z" level=info msg="RemoveContainer for \"ea9e85492cab11d04c4610b349d14e65f48b4f7ef9b1bf510cce3f98d9f23a26\""
	Apr 14 14:58:37 ha-290859 containerd[810]: time="2025-04-14T14:58:37.723587742Z" level=info msg="RemoveContainer for \"ea9e85492cab11d04c4610b349d14e65f48b4f7ef9b1bf510cce3f98d9f23a26\" returns successfully"
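	
	The CreateContainer/StartContainer pairs above are containerd servicing CRI requests from the kubelet as it recreates the pods after the restart. For reference, the equivalent create-then-start sequence through containerd's Go client looks roughly like the sketch below; the socket path and the "k8s.io" namespace match the node, while the container ID and image are illustrative (the kubelet itself goes through the CRI API, not this client).
	
	package main
	
	import (
		"context"
		"log"
	
		"github.com/containerd/containerd"
		"github.com/containerd/containerd/cio"
		"github.com/containerd/containerd/namespaces"
		"github.com/containerd/containerd/oci"
	)
	
	func main() {
		client, err := containerd.New("/run/containerd/containerd.sock")
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
	
		// Kubernetes-managed containers live in the "k8s.io" namespace.
		ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	
		image, err := client.Pull(ctx, "docker.io/library/busybox:latest", containerd.WithPullUnpack)
		if err != nil {
			log.Fatal(err)
		}
	
		// CreateContainer: register the container and its runtime spec.
		container, err := client.NewContainer(ctx, "demo",
			containerd.WithNewSnapshot("demo-snapshot", image),
			containerd.WithNewSpec(oci.WithImageConfig(image)),
		)
		if err != nil {
			log.Fatal(err)
		}
		defer container.Delete(ctx, containerd.WithSnapshotCleanup)
	
		// StartContainer: create a task for the container and start it.
		task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
		if err != nil {
			log.Fatal(err)
		}
		if err := task.Start(ctx); err != nil {
			log.Fatal(err)
		}
	}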
	
	
	==> coredns [0f27e1896d7d8affdf5ea3f3d689621317ac5dd614efdec9760ed999a671d09b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:59704 - 15384 "HINFO IN 3055302781125270555.5415609130414620230. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011987417s
	
	
	==> coredns [c3c2f4d5fe419392ff3850394da92847c7bcfe369f4d0eddffd38c2a59b41025] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:48956 - 43158 "HINFO IN 5542730592661564248.5649616312753148618. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009354162s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1967277509]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Apr-2025 14:52:05.690) (total time: 30002ms):
	Trace[1967277509]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (14:52:35.692)
	Trace[1967277509]: [30.002592464s] [30.002592464s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1343823812]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Apr-2025 14:52:05.691) (total time: 30002ms):
	Trace[1343823812]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (14:52:35.693)
	Trace[1343823812]: [30.00250289s] [30.00250289s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2019019817]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Apr-2025 14:52:05.690) (total time: 30004ms):
	Trace[2019019817]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (14:52:35.694)
	Trace[2019019817]: [30.004408468s] [30.004408468s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [d21d3749d95316043a2048ebc2e089fa0f11d073272994afdc259f70b2a8b4b2] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:40875 - 61725 "HINFO IN 7277428254337144252.7281013876815056028. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014879071s
	
	
	==> coredns [d9bf8cef6e9551ba044bfa75d53bebdabf94a544fb35bcba8ae9dda955c97297] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:52958 - 12430 "HINFO IN 2501253073000439982.8063739159986489070. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.007070061s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1427080852]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Apr-2025 14:52:05.691) (total time: 30002ms):
	Trace[1427080852]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (14:52:35.691)
	Trace[1427080852]: [30.002092041s] [30.002092041s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1959333545]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Apr-2025 14:52:05.691) (total time: 30002ms):
	Trace[1959333545]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (14:52:35.692)
	Trace[1959333545]: [30.002031471s] [30.002031471s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[910229496]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Apr-2025 14:52:05.690) (total time: 30001ms):
	Trace[910229496]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (14:52:35.691)
	Trace[910229496]: [30.001488485s] [30.001488485s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
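	
	Both exited coredns containers report the same failure mode: list/watch calls against the in-cluster API VIP hang for ~30s and fail with "dial tcp 10.96.0.1:443: i/o timeout", consistent with the service VIP being unroutable while kube-proxy was down across the restart. A minimal reachability check for that symptom, runnable from inside the cluster network with only the Go standard library; the address comes from the log above, the timeout value is illustrative.
	
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// Try the kubernetes service VIP that the reflectors above were dialing.
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
		if err != nil {
			fmt.Println("service VIP unreachable:", err) // mirrors the i/o timeout in the log
			return
		}
		conn.Close()
		fmt.Println("service VIP reachable")
	}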
	
	
	==> describe nodes <==
	Name:               ha-290859
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-290859
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ed8f1f01b35eff2786f40199152a1775806f2de2
	                    minikube.k8s.io/name=ha-290859
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_14T14_29_26_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Apr 2025 14:29:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-290859
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Apr 2025 14:58:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Apr 2025 14:58:35 +0000   Mon, 14 Apr 2025 14:29:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Apr 2025 14:58:35 +0000   Mon, 14 Apr 2025 14:29:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Apr 2025 14:58:35 +0000   Mon, 14 Apr 2025 14:29:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Apr 2025 14:58:35 +0000   Mon, 14 Apr 2025 14:29:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.110
	  Hostname:    ha-290859
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0538f5775f954b3bbf6bc94e8eb6c49a
	  System UUID:                0538f577-5f95-4b3b-bf6b-c94e8eb6c49a
	  Boot ID:                    3149ba3d-ea7c-4ba6-9ae8-cd7b558f527e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-t6bgg             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-668d6bf9bc-qnl6q             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 coredns-668d6bf9bc-wbn4p             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-ha-290859                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kindnet-hm99t                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      29m
	  kube-system                 kube-apiserver-ha-290859             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-ha-290859    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-cg945                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-ha-290859             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-vip-ha-290859                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m42s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 8s                     kube-proxy       
	  Normal   Starting                 6m40s                  kube-proxy       
	  Normal   Starting                 29m                    kube-proxy       
	  Normal   Starting                 29m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  29m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  29m                    kubelet          Node ha-290859 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    29m                    kubelet          Node ha-290859 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     29m                    kubelet          Node ha-290859 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           29m                    node-controller  Node ha-290859 event: Registered Node ha-290859 in Controller
	  Normal   NodeReady                29m                    kubelet          Node ha-290859 status is now: NodeReady
	  Normal   NodeHasNoDiskPressure    6m58s (x8 over 6m58s)  kubelet          Node ha-290859 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  6m58s (x8 over 6m58s)  kubelet          Node ha-290859 status is now: NodeHasSufficientMemory
	  Normal   Starting                 6m58s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     6m58s (x7 over 6m58s)  kubelet          Node ha-290859 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  6m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           6m45s                  node-controller  Node ha-290859 event: Registered Node ha-290859 in Controller
	  Warning  Rebooted                 6m44s                  kubelet          Node ha-290859 has been rebooted, boot id: 506c18f2-7f12-4001-8285-917ecaddf63d
	  Normal   Starting                 25s                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  25s (x8 over 25s)      kubelet          Node ha-290859 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    25s (x8 over 25s)      kubelet          Node ha-290859 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     25s (x7 over 25s)      kubelet          Node ha-290859 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  25s                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12s                    node-controller  Node ha-290859 event: Registered Node ha-290859 in Controller
	  Warning  Rebooted                 11s                    kubelet          Node ha-290859 has been rebooted, boot id: 3149ba3d-ea7c-4ba6-9ae8-cd7b558f527e
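
	Note: the two Warning/Rebooted events record two separate reboots. The first (6m44s ago, boot id 506c18f2-...) matches the 14:51-14:52 restart visible in the logs below; the second (11s ago, boot id 3149ba3d-...) matches the Boot ID in System Info above and the 14:58 restarts. Each reboot is followed by the same kubelet startup burst and a fresh RegisteredNode event. A short client-go sketch for checking the live boot id and node conditions (the kubeconfig path is client-go's default and an assumption here):

		package main

		import (
			"context"
			"fmt"

			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/client-go/kubernetes"
			"k8s.io/client-go/tools/clientcmd"
		)

		func main() {
			cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
			if err != nil {
				panic(err)
			}
			cs, err := kubernetes.NewForConfig(cfg)
			if err != nil {
				panic(err)
			}
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-290859", metav1.GetOptions{})
			if err != nil {
				panic(err)
			}
			// The boot id changes on every reboot; compare with the Rebooted events.
			fmt.Println("bootID:", node.Status.NodeInfo.BootID)
			for _, c := range node.Status.Conditions {
				fmt.Printf("%s=%s (%s)\n", c.Type, c.Status, c.Reason)
			}
		}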
	
	
	==> dmesg <==
	[Apr14 14:58] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051683] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.843923] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.043399] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.575809] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.249693] systemd-fstab-generator[734]: Ignoring "noauto" option for root device
	[  +0.062117] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057981] systemd-fstab-generator[746]: Ignoring "noauto" option for root device
	[  +0.152569] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +0.141008] systemd-fstab-generator[772]: Ignoring "noauto" option for root device
	[  +0.261928] systemd-fstab-generator[802]: Ignoring "noauto" option for root device
	[  +1.252111] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +6.934966] kauditd_printk_skb: 197 callbacks suppressed
	[  +7.848241] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.343275] kauditd_printk_skb: 79 callbacks suppressed
	
	
	==> etcd [1b6ed0e1da7871e61eb9f4e788d0fd3e644a201fc7a97b9bc959f00771b6b0d3] <==
	{"level":"info","ts":"2025-04-14T14:58:28.550164Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 switched to configuration voters=(18136004197972551064)"}
	{"level":"info","ts":"2025-04-14T14:58:28.550244Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a3dbfa6decfc8853","local-member-id":"fbb007bab925a598","added-peer-id":"fbb007bab925a598","added-peer-peer-urls":["https://192.168.39.110:2380"]}
	{"level":"info","ts":"2025-04-14T14:58:28.550348Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a3dbfa6decfc8853","local-member-id":"fbb007bab925a598","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:58:28.550386Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:58:28.553431Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-04-14T14:58:28.554363Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"fbb007bab925a598","initial-advertise-peer-urls":["https://192.168.39.110:2380"],"listen-peer-urls":["https://192.168.39.110:2380"],"advertise-client-urls":["https://192.168.39.110:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.110:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-14T14:58:28.554632Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-14T14:58:28.556303Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.110:2380"}
	{"level":"info","ts":"2025-04-14T14:58:28.556557Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.110:2380"}
	{"level":"info","ts":"2025-04-14T14:58:29.820530Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 is starting a new election at term 3"}
	{"level":"info","ts":"2025-04-14T14:58:29.820654Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 became pre-candidate at term 3"}
	{"level":"info","ts":"2025-04-14T14:58:29.820684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 received MsgPreVoteResp from fbb007bab925a598 at term 3"}
	{"level":"info","ts":"2025-04-14T14:58:29.820735Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 became candidate at term 4"}
	{"level":"info","ts":"2025-04-14T14:58:29.820781Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 received MsgVoteResp from fbb007bab925a598 at term 4"}
	{"level":"info","ts":"2025-04-14T14:58:29.820813Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 became leader at term 4"}
	{"level":"info","ts":"2025-04-14T14:58:29.820829Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fbb007bab925a598 elected leader fbb007bab925a598 at term 4"}
	{"level":"info","ts":"2025-04-14T14:58:29.827830Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"fbb007bab925a598","local-member-attributes":"{Name:ha-290859 ClientURLs:[https://192.168.39.110:2379]}","request-path":"/0/members/fbb007bab925a598/attributes","cluster-id":"a3dbfa6decfc8853","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-14T14:58:29.828311Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T14:58:29.828439Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T14:58:29.835592Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-14T14:58:29.835639Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-14T14:58:29.837819Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T14:58:29.838595Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-14T14:58:29.840733Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T14:58:29.842067Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.110:2379"}
	
	
	==> etcd [6bb8bbfa1b317897b9bcc96ba49e7c68f83cc4409dd69a72b86f0448aa2519ea] <==
	{"level":"info","ts":"2025-04-14T14:51:55.652582Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a3dbfa6decfc8853","local-member-id":"fbb007bab925a598","added-peer-id":"fbb007bab925a598","added-peer-peer-urls":["https://192.168.39.110:2380"]}
	{"level":"info","ts":"2025-04-14T14:51:55.652820Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a3dbfa6decfc8853","local-member-id":"fbb007bab925a598","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:51:55.652875Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T14:51:55.657644Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T14:51:55.677815Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-04-14T14:51:55.678882Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"fbb007bab925a598","initial-advertise-peer-urls":["https://192.168.39.110:2380"],"listen-peer-urls":["https://192.168.39.110:2380"],"advertise-client-urls":["https://192.168.39.110:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.110:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-14T14:51:55.678927Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-14T14:51:55.679144Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.110:2380"}
	{"level":"info","ts":"2025-04-14T14:51:55.679165Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.110:2380"}
	{"level":"info","ts":"2025-04-14T14:51:56.795570Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 is starting a new election at term 2"}
	{"level":"info","ts":"2025-04-14T14:51:56.795637Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-04-14T14:51:56.795654Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 received MsgPreVoteResp from fbb007bab925a598 at term 2"}
	{"level":"info","ts":"2025-04-14T14:51:56.795666Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 became candidate at term 3"}
	{"level":"info","ts":"2025-04-14T14:51:56.795959Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 received MsgVoteResp from fbb007bab925a598 at term 3"}
	{"level":"info","ts":"2025-04-14T14:51:56.796217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 became leader at term 3"}
	{"level":"info","ts":"2025-04-14T14:51:56.796240Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fbb007bab925a598 elected leader fbb007bab925a598 at term 3"}
	{"level":"info","ts":"2025-04-14T14:51:56.797919Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"fbb007bab925a598","local-member-attributes":"{Name:ha-290859 ClientURLs:[https://192.168.39.110:2379]}","request-path":"/0/members/fbb007bab925a598/attributes","cluster-id":"a3dbfa6decfc8853","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-14T14:51:56.798371Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T14:51:56.798558Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T14:51:56.799556Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-14T14:51:56.799592Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-14T14:51:56.800393Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T14:51:56.801226Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.110:2379"}
	{"level":"info","ts":"2025-04-14T14:51:56.800393Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T14:51:56.802399Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 14:58:46 up 0 min,  0 users,  load average: 0.78, 0.20, 0.07
	Linux ha-290859 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [607041fc2f4edc17de3caec2d00a9f9b49a94ed154254da72ec094a0f148db36] <==
	I0414 14:55:26.465845       1 main.go:301] handling current node
	I0414 14:55:26.465927       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:55:26.465968       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:55:36.463752       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:55:36.463830       1 main.go:301] handling current node
	I0414 14:55:36.463853       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:55:36.463859       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:55:46.456585       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:55:46.457113       1 main.go:301] handling current node
	I0414 14:55:46.457561       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:55:46.459726       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:55:56.464186       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:55:56.464300       1 main.go:301] handling current node
	I0414 14:55:56.464332       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:55:56.464345       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:56:06.455081       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:56:06.455167       1 main.go:301] handling current node
	I0414 14:56:06.455204       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:56:06.455229       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:56:16.454747       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:56:16.454884       1 main.go:301] handling current node
	I0414 14:56:16.454938       1 main.go:297] Handling node with IPs: map[192.168.39.112:{}]
	I0414 14:56:16.455070       1 main.go:324] Node ha-290859-m03 has CIDR [10.244.1.0/24] 
	I0414 14:56:26.458705       1 main.go:297] Handling node with IPs: map[192.168.39.110:{}]
	I0414 14:56:26.458788       1 main.go:301] handling current node
	
	
	==> kindnet [cd2446ef37b3a94be1a9c4475f0ac47563d2eed5fc9e82860e886726a07d3c9d] <==
	I0414 14:58:37.526033       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0414 14:58:37.527189       1 main.go:139] hostIP = 192.168.39.110
	podIP = 192.168.39.110
	I0414 14:58:37.527598       1 main.go:148] setting mtu 1500 for CNI 
	I0414 14:58:37.527711       1 main.go:178] kindnetd IP family: "ipv4"
	I0414 14:58:37.527744       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0414 14:58:38.223658       1 main.go:239] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-40: Error: Could not process rule: Operation not supported
	add table inet kindnet-network-policies
	^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	, skipping network policies
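
	Note: the "Operation not supported" from nft means the guest kernel (Buildroot 5.10.207, see the kernel section above) rejects the nf_tables operation kindnet needs, so kindnet skips NetworkPolicy enforcement and continues as a plain CNI; kube-proxy hits the same limitation below and stays on iptables. The failing command can be replayed with a small sketch (root and an nft binary on the guest are assumptions):

		package main

		import (
			"fmt"
			"os/exec"
			"strings"
		)

		func main() {
			// Replay the exact command kindnet ran; on this kernel it should
			// fail with the same "Operation not supported" error.
			cmd := exec.Command("nft", "-f", "-")
			cmd.Stdin = strings.NewReader("add table inet kindnet-network-policies\n")
			out, err := cmd.CombinedOutput()
			fmt.Printf("err=%v\n%s", err, out)
		}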
	
	
	==> kube-apiserver [00b109770be1cb3d772b7d440ccc36c098a8627e8280f195c263a0a87a6e0c07] <==
	I0414 14:51:57.932933       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0414 14:51:58.014528       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0414 14:51:58.014629       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0414 14:51:58.014535       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0414 14:51:58.023891       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0414 14:51:58.024459       1 shared_informer.go:320] Caches are synced for configmaps
	I0414 14:51:58.024473       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0414 14:51:58.024547       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0414 14:51:58.025376       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0414 14:51:58.035556       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0414 14:51:58.035771       1 aggregator.go:171] initial CRD sync complete...
	I0414 14:51:58.035828       1 autoregister_controller.go:144] Starting autoregister controller
	I0414 14:51:58.035845       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0414 14:51:58.035857       1 cache.go:39] Caches are synced for autoregister controller
	I0414 14:51:58.036008       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0414 14:51:58.036120       1 policy_source.go:240] refreshing policies
	I0414 14:51:58.097914       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0414 14:51:58.101123       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0414 14:51:58.918987       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0414 14:51:59.963976       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0414 14:52:04.263824       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0414 14:52:04.306348       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0414 14:52:04.363470       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0414 14:52:04.453440       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0414 14:52:04.454453       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [75d56975ac1d142680be67b738e608f000b15ecea30d29756e95fbfc023351e7] <==
	I0414 14:58:31.161558       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0414 14:58:31.166542       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0414 14:58:31.166760       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0414 14:58:31.166885       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0414 14:58:31.166766       1 shared_informer.go:320] Caches are synced for configmaps
	I0414 14:58:31.168098       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0414 14:58:31.168353       1 policy_source.go:240] refreshing policies
	I0414 14:58:31.174023       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0414 14:58:31.174376       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0414 14:58:31.174659       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0414 14:58:31.174807       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0414 14:58:31.174925       1 aggregator.go:171] initial CRD sync complete...
	I0414 14:58:31.175038       1 autoregister_controller.go:144] Starting autoregister controller
	I0414 14:58:31.175142       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0414 14:58:31.175163       1 cache.go:39] Caches are synced for autoregister controller
	E0414 14:58:31.178605       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0414 14:58:31.225658       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0414 14:58:31.254604       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0414 14:58:32.065758       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0414 14:58:33.154292       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0414 14:58:34.449060       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0414 14:58:35.626136       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0414 14:58:35.650634       1 controller.go:615] quota admission added evaluator for: endpoints
	I0414 14:58:35.651626       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0414 14:58:35.659333       1 controller.go:615] quota admission added evaluator for: deployments.apps
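
	Note: apart from one startup message, both apiserver excerpts are clean: caches sync and the usual quota evaluators register within seconds of each restart. The single error at 14:58:31 ("no API server IP addresses were listed in storage, refusing to erase all endpoints") appears to be the endpoint reconciler declining to act on empty storage right after the reboot, and no further failures follow it. Readiness can be confirmed through the aggregated /readyz endpoint; a client-go sketch (default kubeconfig path assumed):

		package main

		import (
			"context"
			"fmt"

			"k8s.io/client-go/kubernetes"
			"k8s.io/client-go/tools/clientcmd"
		)

		func main() {
			cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
			if err != nil {
				panic(err)
			}
			cs, err := kubernetes.NewForConfig(cfg)
			if err != nil {
				panic(err)
			}
			// /readyz aggregates the post-start checks the log shows completing.
			body, err := cs.Discovery().RESTClient().Get().AbsPath("/readyz").DoRaw(context.TODO())
			fmt.Printf("readyz: %s err: %v\n", body, err)
		}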
	
	
	==> kube-controller-manager [59850cb50c6ed661ecbce3c22fb5d7f8fedfa6f225202e9fa64ce1c162b05c27] <==
	I0414 14:58:34.385992       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0414 14:58:34.391374       1 shared_informer.go:320] Caches are synced for deployment
	I0414 14:58:34.393727       1 shared_informer.go:320] Caches are synced for stateful set
	I0414 14:58:34.393830       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0414 14:58:34.393961       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="60.665µs"
	I0414 14:58:34.394044       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="35.031µs"
	I0414 14:58:34.394727       1 shared_informer.go:320] Caches are synced for attach detach
	I0414 14:58:34.396371       1 shared_informer.go:320] Caches are synced for job
	I0414 14:58:34.400137       1 shared_informer.go:320] Caches are synced for garbage collector
	I0414 14:58:34.400272       1 shared_informer.go:320] Caches are synced for endpoint
	I0414 14:58:35.327644       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
	I0414 14:58:35.633988       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="21.215047ms"
	I0414 14:58:35.635033       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="993.924µs"
	I0414 14:58:35.641186       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="15.026126ms"
	I0414 14:58:35.642248       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="38.803µs"
	I0414 14:58:35.653586       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="9.20356ms"
	I0414 14:58:35.653977       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="46.976µs"
	I0414 14:58:35.656497       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	I0414 14:58:36.696137       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="50.968µs"
	I0414 14:58:36.755662       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="233.482µs"
	I0414 14:58:36.794128       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="76.526µs"
	I0414 14:58:37.738039       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="56.324µs"
	I0414 14:58:37.767286       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="41.145µs"
	I0414 14:58:37.814567       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="7.208647ms"
	I0414 14:58:37.814749       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="38.08µs"
	
	
	==> kube-controller-manager [e8658abcccb8b10d531ad775050d96f3375e484efcbaba4d5509a7a22f3608a9] <==
	I0414 14:52:01.154050       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:52:01.197460       1 shared_informer.go:320] Caches are synced for garbage collector
	I0414 14:52:01.197682       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0414 14:52:01.197815       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0414 14:52:01.207566       1 shared_informer.go:320] Caches are synced for garbage collector
	I0414 14:52:02.153254       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859"
	I0414 14:52:04.272410       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="26.559874ms"
	I0414 14:52:04.273686       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="51.226µs"
	I0414 14:52:04.439056       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="13.737014ms"
	I0414 14:52:04.439344       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="242.032µs"
	I0414 14:52:04.459376       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="12.444236ms"
	I0414 14:52:04.460062       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="174.256µs"
	I0414 14:52:06.474796       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="54.379µs"
	I0414 14:52:06.508895       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="52.708µs"
	I0414 14:52:06.532239       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="7.280916ms"
	I0414 14:52:06.532571       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="115.282µs"
	I0414 14:52:38.517073       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="20.719998ms"
	I0414 14:52:38.517449       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="101.016µs"
	I0414 14:52:38.546449       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="13.225146ms"
	I0414 14:52:38.546575       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="46.763µs"
	I0414 14:56:15.487465       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:56:15.503080       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-290859-m03"
	I0414 14:56:15.536625       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="25.061691ms"
	I0414 14:56:15.546233       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="9.560251ms"
	I0414 14:56:15.546295       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="27.858µs"
	
	
	==> kube-proxy [1c01d86a74294bbfd5f487ec85ffc0f35cc4b979ad90c940eea5a17a8e5f46fb] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0414 14:52:05.724966       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0414 14:52:05.743076       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.110"]
	E0414 14:52:05.743397       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0414 14:52:05.784686       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0414 14:52:05.784731       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0414 14:52:05.784755       1 server_linux.go:170] "Using iptables Proxier"
	I0414 14:52:05.786929       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0414 14:52:05.787617       1 server.go:497] "Version info" version="v1.32.2"
	I0414 14:52:05.787645       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 14:52:05.789983       1 config.go:199] "Starting service config controller"
	I0414 14:52:05.790536       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0414 14:52:05.791108       1 config.go:329] "Starting node config controller"
	I0414 14:52:05.791131       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0414 14:52:05.794555       1 config.go:105] "Starting endpoint slice config controller"
	I0414 14:52:05.796335       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0414 14:52:05.891275       1 shared_informer.go:320] Caches are synced for service config
	I0414 14:52:05.891550       1 shared_informer.go:320] Caches are synced for node config
	I0414 14:52:05.901825       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [98b44c2cfa7a4b06ab38737bc7dc36f53b22ded77e9c27f9809dbb9f9fe5c324] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0414 14:58:37.343608       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0414 14:58:37.363601       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.110"]
	E0414 14:58:37.363880       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0414 14:58:37.395999       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0414 14:58:37.396053       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0414 14:58:37.396107       1 server_linux.go:170] "Using iptables Proxier"
	I0414 14:58:37.399019       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0414 14:58:37.400071       1 server.go:497] "Version info" version="v1.32.2"
	I0414 14:58:37.400095       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 14:58:37.403113       1 config.go:199] "Starting service config controller"
	I0414 14:58:37.403549       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0414 14:58:37.403822       1 config.go:105] "Starting endpoint slice config controller"
	I0414 14:58:37.403846       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0414 14:58:37.408618       1 config.go:329] "Starting node config controller"
	I0414 14:58:37.408781       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0414 14:58:37.504286       1 shared_informer.go:320] Caches are synced for service config
	I0414 14:58:37.504305       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0414 14:58:37.508903       1 shared_informer.go:320] Caches are synced for node config
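
	Note: both kube-proxy restarts follow the same sequence: the startup cleanup of any stale "ip kube-proxy" and "ip6 kube-proxy" nftables tables fails because the kernel rejects nft ("Operation not supported"), IPv6 iptables support is not detected, and the proxy proceeds in single-stack IPv4 iptables mode. The cleanup errors are therefore cosmetic here. With the iptables proxier active, its chains exist in the nat table, which a quick sketch can verify on the node (root and an iptables binary are assumptions):

		package main

		import (
			"fmt"
			"os/exec"
		)

		func main() {
			// The KUBE-SERVICES chain is created by the iptables proxier;
			// a non-zero exit here would mean the proxier never programmed rules.
			out, err := exec.Command("iptables", "-t", "nat", "-n", "-L", "KUBE-SERVICES").CombinedOutput()
			fmt.Printf("err=%v\n%s", err, out)
		}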
	
	
	==> kube-scheduler [29445064369e58250458efcfeed9a28e6da75ce4bcb6f15c9e58844eb1ba811e] <==
	I0414 14:51:55.842470       1 serving.go:386] Generated self-signed cert in-memory
	W0414 14:51:57.981716       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0414 14:51:57.981805       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0414 14:51:57.981829       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0414 14:51:57.981840       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0414 14:51:58.035351       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0414 14:51:58.035404       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 14:51:58.038565       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0414 14:51:58.038986       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0414 14:51:58.039147       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0414 14:51:58.039434       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0414 14:51:58.140699       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d0953cb083c9a1cacb362da16b56221931bc958f9a4c85030f73541bc3733481] <==
	I0414 14:58:29.559160       1 serving.go:386] Generated self-signed cert in-memory
	W0414 14:58:31.109656       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0414 14:58:31.109739       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0414 14:58:31.109762       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0414 14:58:31.109773       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0414 14:58:31.170553       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0414 14:58:31.170679       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 14:58:31.179806       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0414 14:58:31.180175       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0414 14:58:31.180573       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0414 14:58:31.180960       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0414 14:58:31.281204       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
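
	Note: the only warnings in either scheduler excerpt are the familiar startup race on configmap/extension-apiserver-authentication: the scheduler asks for it before its RBAC read access is in place, prints the remedial rolebinding itself, and continues; the client-ca controller then syncs normally (14:58:31.281). Whether the configmap is readable afterwards can be checked directly; a client-go sketch (default kubeconfig path assumed):

		package main

		import (
			"context"
			"fmt"

			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/client-go/kubernetes"
			"k8s.io/client-go/tools/clientcmd"
		)

		func main() {
			cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
			if err != nil {
				panic(err)
			}
			cs, err := kubernetes.NewForConfig(cfg)
			if err != nil {
				panic(err)
			}
			// The configmap the scheduler could not read during startup.
			cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.TODO(),
				"extension-apiserver-authentication", metav1.GetOptions{})
			if err != nil {
				panic(err)
			}
			fmt.Println("keys present:", len(cm.Data))
		}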
	
	
	==> kubelet <==
	Apr 14 14:58:35 ha-290859 kubelet[889]: I0414 14:58:35.520677     889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4bd6869b-0b23-4901-b9fa-02d62196a4f0-xtables-lock\") pod \"kube-proxy-cg945\" (UID: \"4bd6869b-0b23-4901-b9fa-02d62196a4f0\") " pod="kube-system/kube-proxy-cg945"
	Apr 14 14:58:35 ha-290859 kubelet[889]: I0414 14:58:35.521422     889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b3479bb3-d98e-42a9-bf3a-a6d20c52de81-cni-cfg\") pod \"kindnet-hm99t\" (UID: \"b3479bb3-d98e-42a9-bf3a-a6d20c52de81\") " pod="kube-system/kindnet-hm99t"
	Apr 14 14:58:35 ha-290859 kubelet[889]: I0414 14:58:35.522406     889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b3479bb3-d98e-42a9-bf3a-a6d20c52de81-xtables-lock\") pod \"kindnet-hm99t\" (UID: \"b3479bb3-d98e-42a9-bf3a-a6d20c52de81\") " pod="kube-system/kindnet-hm99t"
	Apr 14 14:58:35 ha-290859 kubelet[889]: I0414 14:58:35.534403     889 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Apr 14 14:58:35 ha-290859 kubelet[889]: E0414 14:58:35.970077     889 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:kube-proxy,Image:registry.k8s.io/kube-proxy:v1.32.2,Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=$(NODE_NAME)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-proxy,ReadOnly:false,MountPath:/var/lib/kube-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ldcxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-proxy-cg945_kube-system(4bd6869b-0b23-4901-b9fa-02d62196a4f0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Apr 14 14:58:35 ha-290859 kubelet[889]: E0414 14:58:35.971282     889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kube-proxy-cg945" podUID="4bd6869b-0b23-4901-b9fa-02d62196a4f0"
	Apr 14 14:58:36 ha-290859 kubelet[889]: E0414 14:58:36.053280     889 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:storage-provisioner,Image:gcr.io/k8s-minikube/storage-provisioner:v5,Command:[/storage-provisioner],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xnm4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod storage-provisioner_kube-system(a98bb55f-5a73-4436-82eb-ae7534928039): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Apr 14 14:58:36 ha-290859 kubelet[889]: E0414 14:58:36.054904     889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/storage-provisioner" podUID="a98bb55f-5a73-4436-82eb-ae7534928039"
	Apr 14 14:58:36 ha-290859 kubelet[889]: E0414 14:58:36.083686     889 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:busybox,Image:gcr.io/k8s-minikube/busybox:1.28,Command:[sleep 3600],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4zwzn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_RAW],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod busybox-58667487b6-t6bgg_default(bd39f57c-bcb5-4d77-b171-6d4d2f237e54): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Apr 14 14:58:36 ha-290859 kubelet[889]: E0414 14:58:36.085439     889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="default/busybox-58667487b6-t6bgg" podUID="bd39f57c-bcb5-4d77-b171-6d4d2f237e54"
	Apr 14 14:58:36 ha-290859 kubelet[889]: E0414 14:58:36.116537     889 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:coredns,Image:registry.k8s.io/coredns/coredns:v1.11.3,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {<nil>} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{73400320 0} {<nil>} 70Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sggjh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod coredns-668d6bf9bc-wbn4p_kube-system(5c2a6c8d-60f5-466d-8f59-f43a26cf06c4): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Apr 14 14:58:36 ha-290859 kubelet[889]: E0414 14:58:36.119569     889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-668d6bf9bc-wbn4p" podUID="5c2a6c8d-60f5-466d-8f59-f43a26cf06c4"
	Apr 14 14:58:36 ha-290859 kubelet[889]: E0414 14:58:36.149829     889 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:kindnet-cni,Image:docker.io/kindest/kindnetd:v20250214-acbabc1a,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:HOST_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.hostIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_SUBNET,Value:10.244.0.0/16,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{52428800 0} {<nil>} 50Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{52428800 0} {<nil>} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-cfg,ReadOnly:false,MountPath:/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z2rxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_RAW NET_ADMIN],Drop:[],},Privileged:*false,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kindnet-hm99t_kube-system(b3479bb3-d98e-42a9-bf3a-a6d20c52de81): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Apr 14 14:58:36 ha-290859 kubelet[889]: E0414 14:58:36.151385     889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kindnet-hm99t" podUID="b3479bb3-d98e-42a9-bf3a-a6d20c52de81"
	Apr 14 14:58:36 ha-290859 kubelet[889]: E0414 14:58:36.154834     889 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:coredns,Image:registry.k8s.io/coredns/coredns:v1.11.3,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {<nil>} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{73400320 0} {<nil>} 70Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k9lng,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod coredns-668d6bf9bc-qnl6q_kube-system(a590080d-c4b1-4697-9849-ae6130e483a3): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Apr 14 14:58:36 ha-290859 kubelet[889]: E0414 14:58:36.156234     889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-668d6bf9bc-qnl6q" podUID="a590080d-c4b1-4697-9849-ae6130e483a3"
	Apr 14 14:58:36 ha-290859 kubelet[889]: I0414 14:58:36.667871     889 scope.go:117] "RemoveContainer" containerID="d9bf8cef6e9551ba044bfa75d53bebdabf94a544fb35bcba8ae9dda955c97297"
	Apr 14 14:58:36 ha-290859 kubelet[889]: I0414 14:58:36.670434     889 scope.go:117] "RemoveContainer" containerID="1c01d86a74294bbfd5f487ec85ffc0f35cc4b979ad90c940eea5a17a8e5f46fb"
	Apr 14 14:58:36 ha-290859 kubelet[889]: I0414 14:58:36.677882     889 scope.go:117] "RemoveContainer" containerID="6def8b5e81c3c293839e823e7db25b60e0f88e530e87f93ad6439e1ef8967337"
	Apr 14 14:58:36 ha-290859 kubelet[889]: I0414 14:58:36.687343     889 scope.go:117] "RemoveContainer" containerID="607041fc2f4edc17de3caec2d00a9f9b49a94ed154254da72ec094a0f148db36"
	Apr 14 14:58:36 ha-290859 kubelet[889]: I0414 14:58:36.692541     889 scope.go:117] "RemoveContainer" containerID="c3c2f4d5fe419392ff3850394da92847c7bcfe369f4d0eddffd38c2a59b41025"
	Apr 14 14:58:36 ha-290859 kubelet[889]: I0414 14:58:36.706561     889 scope.go:117] "RemoveContainer" containerID="ea9e85492cab11d04c4610b349d14e65f48b4f7ef9b1bf510cce3f98d9f23a26"
	Apr 14 14:58:37 ha-290859 kubelet[889]: I0414 14:58:37.715411     889 scope.go:117] "RemoveContainer" containerID="ea9e85492cab11d04c4610b349d14e65f48b4f7ef9b1bf510cce3f98d9f23a26"
	Apr 14 14:58:37 ha-290859 kubelet[889]: I0414 14:58:37.716126     889 scope.go:117] "RemoveContainer" containerID="a53ff9bb85d8f802b5183370fb36599bcf001a6d3d9d84fc3b698195021d436e"
	Apr 14 14:58:37 ha-290859 kubelet[889]: E0414 14:58:37.716362     889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a98bb55f-5a73-4436-82eb-ae7534928039)\"" pod="kube-system/storage-provisioner" podUID="a98bb55f-5a73-4436-82eb-ae7534928039"
	
-- /stdout --
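The CreateContainerConfigError entries above are typically transient: after a kubelet restart, it cannot build a pod's per-service environment variables until its service informer has synced at least once, which is exactly what "services have not yet been read at least once, cannot construct envvars" reports. A minimal sketch for watching the two affected pods recover, assuming the ha-290859 context from this run (these are not commands the harness executes):

# Stream status for the pods named in the kubelet errors until they
# leave CreateContainerConfigError and start running.
kubectl --context ha-290859 -n kube-system get pod kindnet-hm99t coredns-668d6bf9bc-qnl6q -w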
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-290859 -n ha-290859
helpers_test.go:261: (dbg) Run:  kubectl --context ha-290859 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-58667487b6-bfghg busybox-58667487b6-q9jvx
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/RestartCluster]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-290859 describe pod busybox-58667487b6-bfghg busybox-58667487b6-q9jvx
helpers_test.go:282: (dbg) kubectl --context ha-290859 describe pod busybox-58667487b6-bfghg busybox-58667487b6-q9jvx:

-- stdout --
	Name:             busybox-58667487b6-bfghg
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=58667487b6
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-58667487b6
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6l76h (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-6l76h:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age    From               Message
	  ----     ------            ----   ----               -------
	  Warning  FailedScheduling  2m31s  default-scheduler  0/2 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  15s    default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	
	
	Name:             busybox-58667487b6-q9jvx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=58667487b6
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-58667487b6
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fklg7 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-fklg7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                    From               Message
	  ----     ------            ----                   ----               -------
	  Warning  FailedScheduling  6m45s (x2 over 6m48s)  default-scheduler  0/2 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  15s                    default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  18m (x3 over 28m)      default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  16m (x2 over 16m)      default-scheduler  0/2 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  10m (x3 over 15m)      default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  9m27s (x2 over 9m38s)  default-scheduler  0/2 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.

-- /stdout --
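Both busybox replicas are Pending for the same reason: every FailedScheduling event above cites pod anti-affinity, which is consistent with a two-replica workload that needs one schedulable node per pod while only one node is currently schedulable. A diagnosis sketch, assuming the Deployment is named busybox (inferred from ReplicaSet busybox-58667487b6 and the app=busybox label; not commands from the harness):

# List only the scheduling failures for this namespace.
kubectl --context ha-290859 get events -n default --field-selector reason=FailedScheduling
# Inspect the anti-affinity rule the scheduler is enforcing.
kubectl --context ha-290859 get deployment busybox -o jsonpath='{.spec.template.spec.affinity.podAntiAffinity}'
# Compare schedulable node count against the replica count.
kubectl --context ha-290859 get nodes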
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (47.16s)

Test pass (272/326)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 22.81
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.32.2/json-events 11.46
13 TestDownloadOnly/v1.32.2/preload-exists 0
17 TestDownloadOnly/v1.32.2/LogsDuration 0.06
18 TestDownloadOnly/v1.32.2/DeleteAll 0.14
19 TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.78
22 TestOffline 79.09
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 212.92
29 TestAddons/serial/Volcano 41.77
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 9.51
35 TestAddons/parallel/Registry 15.61
36 TestAddons/parallel/Ingress 20.65
37 TestAddons/parallel/InspektorGadget 11.81
38 TestAddons/parallel/MetricsServer 5.83
40 TestAddons/parallel/CSI 43.3
41 TestAddons/parallel/Headlamp 20.15
42 TestAddons/parallel/CloudSpanner 5.89
43 TestAddons/parallel/LocalPath 55.68
44 TestAddons/parallel/NvidiaDevicePlugin 6.63
45 TestAddons/parallel/Yakd 12.07
47 TestAddons/StoppedEnableDisable 91.26
48 TestCertOptions 92.04
49 TestCertExpiration 331.49
51 TestForceSystemdFlag 99.32
52 TestForceSystemdEnv 45.68
54 TestKVMDriverInstallOrUpdate 4.12
58 TestErrorSpam/setup 45.26
59 TestErrorSpam/start 0.35
60 TestErrorSpam/status 0.72
61 TestErrorSpam/pause 1.49
62 TestErrorSpam/unpause 1.67
63 TestErrorSpam/stop 5.05
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 56.37
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 41.54
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.12
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.21
75 TestFunctional/serial/CacheCmd/cache/add_local 1.95
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.54
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 44.5
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.34
86 TestFunctional/serial/LogsFileCmd 1.34
87 TestFunctional/serial/InvalidService 4.3
89 TestFunctional/parallel/ConfigCmd 0.34
90 TestFunctional/parallel/DashboardCmd 30.84
91 TestFunctional/parallel/DryRun 0.3
92 TestFunctional/parallel/InternationalLanguage 0.16
93 TestFunctional/parallel/StatusCmd 0.71
97 TestFunctional/parallel/ServiceCmdConnect 10.51
98 TestFunctional/parallel/AddonsCmd 0.13
99 TestFunctional/parallel/PersistentVolumeClaim 42.99
101 TestFunctional/parallel/SSHCmd 0.43
102 TestFunctional/parallel/CpCmd 1.31
103 TestFunctional/parallel/MySQL 27.41
104 TestFunctional/parallel/FileSync 0.23
105 TestFunctional/parallel/CertSync 1.42
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.46
113 TestFunctional/parallel/License 0.59
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
118 TestFunctional/parallel/ImageCommands/ImageBuild 5.11
119 TestFunctional/parallel/ImageCommands/Setup 1.71
120 TestFunctional/parallel/Version/short 0.06
121 TestFunctional/parallel/Version/components 0.51
122 TestFunctional/parallel/ServiceCmd/DeployApp 11.15
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.41
133 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.33
134 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.92
135 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.42
136 TestFunctional/parallel/ImageCommands/ImageRemove 0.43
137 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.71
138 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.37
139 TestFunctional/parallel/ProfileCmd/profile_not_create 0.33
140 TestFunctional/parallel/ProfileCmd/profile_list 0.34
141 TestFunctional/parallel/ProfileCmd/profile_json_output 0.33
142 TestFunctional/parallel/MountCmd/any-port 11.67
143 TestFunctional/parallel/ServiceCmd/List 0.26
144 TestFunctional/parallel/ServiceCmd/JSONOutput 0.34
145 TestFunctional/parallel/ServiceCmd/HTTPS 0.35
146 TestFunctional/parallel/ServiceCmd/Format 0.31
147 TestFunctional/parallel/ServiceCmd/URL 0.31
148 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
149 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
150 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
151 TestFunctional/parallel/MountCmd/specific-port 1.61
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.43
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
164 TestMultiControlPlane/serial/NodeLabels 0.06
179 TestJSONOutput/start/Command 52.69
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 0.69
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 0.6
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 6.47
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.2
207 TestMainNoArgs 0.05
208 TestMinikubeProfile 89.39
211 TestMountStart/serial/StartWithMountFirst 28.32
212 TestMountStart/serial/VerifyMountFirst 0.39
213 TestMountStart/serial/StartWithMountSecond 28.74
214 TestMountStart/serial/VerifyMountSecond 0.37
215 TestMountStart/serial/DeleteFirst 0.72
216 TestMountStart/serial/VerifyMountPostDelete 0.38
217 TestMountStart/serial/Stop 2.28
218 TestMountStart/serial/RestartStopped 23.75
219 TestMountStart/serial/VerifyMountPostStop 0.39
222 TestMultiNode/serial/FreshStart2Nodes 115.14
223 TestMultiNode/serial/DeployApp2Nodes 6.12
224 TestMultiNode/serial/PingHostFrom2Pods 0.78
225 TestMultiNode/serial/AddNode 51.79
226 TestMultiNode/serial/MultiNodeLabels 0.07
227 TestMultiNode/serial/ProfileList 0.59
228 TestMultiNode/serial/CopyFile 7.35
229 TestMultiNode/serial/StopNode 2.16
230 TestMultiNode/serial/StartAfterStop 32.65
231 TestMultiNode/serial/RestartKeepsNodes 325.23
232 TestMultiNode/serial/DeleteNode 2.19
233 TestMultiNode/serial/StopMultiNode 181.65
234 TestMultiNode/serial/RestartMultiNode 135.9
235 TestMultiNode/serial/ValidateNameConflict 46.4
240 TestPreload 227.85
242 TestScheduledStopUnix 116.86
246 TestRunningBinaryUpgrade 160.77
248 TestKubernetesUpgrade 173.03
252 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
255 TestNoKubernetes/serial/StartWithK8s 93.03
260 TestNetworkPlugins/group/false 3.23
264 TestNoKubernetes/serial/StartWithStopK8s 78.58
265 TestNoKubernetes/serial/Start 72.75
266 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
267 TestNoKubernetes/serial/ProfileList 2.03
268 TestNoKubernetes/serial/Stop 1.49
269 TestNoKubernetes/serial/StartNoArgs 22.18
270 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.26
271 TestStoppedBinaryUpgrade/Setup 2.3
272 TestStoppedBinaryUpgrade/Upgrade 166.17
281 TestPause/serial/Start 66.88
282 TestNetworkPlugins/group/auto/Start 62.12
283 TestNetworkPlugins/group/kindnet/Start 93.29
284 TestPause/serial/SecondStartNoReconfiguration 85.09
285 TestNetworkPlugins/group/auto/KubeletFlags 0.24
286 TestNetworkPlugins/group/auto/NetCatPod 12.26
287 TestNetworkPlugins/group/auto/DNS 0.21
288 TestNetworkPlugins/group/auto/Localhost 0.15
289 TestNetworkPlugins/group/auto/HairPin 0.5
290 TestNetworkPlugins/group/calico/Start 84.89
291 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
292 TestStoppedBinaryUpgrade/MinikubeLogs 0.84
293 TestNetworkPlugins/group/custom-flannel/Start 79.86
294 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
295 TestNetworkPlugins/group/kindnet/NetCatPod 9.26
296 TestNetworkPlugins/group/kindnet/DNS 0.12
297 TestNetworkPlugins/group/kindnet/Localhost 0.11
298 TestNetworkPlugins/group/kindnet/HairPin 0.11
299 TestPause/serial/Pause 0.64
300 TestPause/serial/VerifyStatus 0.24
301 TestPause/serial/Unpause 0.61
302 TestPause/serial/PauseAgain 0.79
303 TestPause/serial/DeletePaused 1.07
304 TestPause/serial/VerifyDeletedResources 0.53
305 TestNetworkPlugins/group/enable-default-cni/Start 85.16
306 TestNetworkPlugins/group/flannel/Start 114.29
307 TestNetworkPlugins/group/calico/ControllerPod 6.01
308 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
309 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.25
310 TestNetworkPlugins/group/calico/KubeletFlags 0.25
311 TestNetworkPlugins/group/calico/NetCatPod 11.26
312 TestNetworkPlugins/group/custom-flannel/DNS 0.16
313 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
314 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
315 TestNetworkPlugins/group/calico/DNS 0.21
316 TestNetworkPlugins/group/calico/Localhost 0.17
317 TestNetworkPlugins/group/calico/HairPin 0.15
318 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
319 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.31
320 TestNetworkPlugins/group/bridge/Start 62.87
322 TestStartStop/group/old-k8s-version/serial/FirstStart 188.91
323 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
324 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
325 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
327 TestStartStop/group/no-preload/serial/FirstStart 104.99
328 TestNetworkPlugins/group/flannel/ControllerPod 6.01
329 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
330 TestNetworkPlugins/group/flannel/NetCatPod 10.2
331 TestNetworkPlugins/group/flannel/DNS 0.19
332 TestNetworkPlugins/group/flannel/Localhost 0.13
333 TestNetworkPlugins/group/flannel/HairPin 0.13
334 TestNetworkPlugins/group/bridge/KubeletFlags 0.26
335 TestNetworkPlugins/group/bridge/NetCatPod 10.34
337 TestStartStop/group/embed-certs/serial/FirstStart 69.05
338 TestNetworkPlugins/group/bridge/DNS 0.14
339 TestNetworkPlugins/group/bridge/Localhost 0.11
340 TestNetworkPlugins/group/bridge/HairPin 0.11
342 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 72.23
343 TestStartStop/group/no-preload/serial/DeployApp 9.49
344 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.35
345 TestStartStop/group/no-preload/serial/Stop 90.82
346 TestStartStop/group/embed-certs/serial/DeployApp 10.3
347 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.02
348 TestStartStop/group/embed-certs/serial/Stop 91.32
349 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.27
350 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.94
351 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.18
352 TestStartStop/group/old-k8s-version/serial/DeployApp 9.45
353 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.87
354 TestStartStop/group/old-k8s-version/serial/Stop 91.5
355 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
356 TestStartStop/group/no-preload/serial/SecondStart 319.42
357 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
358 TestStartStop/group/embed-certs/serial/SecondStart 318.37
359 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
360 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 305.43
361 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.24
362 TestStartStop/group/old-k8s-version/serial/SecondStart 161.97
363 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
364 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
365 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
366 TestStartStop/group/old-k8s-version/serial/Pause 2.47
368 TestStartStop/group/newest-cni/serial/FirstStart 45.33
369 TestStartStop/group/newest-cni/serial/DeployApp 0
370 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.2
371 TestStartStop/group/newest-cni/serial/Stop 2.32
372 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
373 TestStartStop/group/newest-cni/serial/SecondStart 37.38
374 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
375 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
376 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
377 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
378 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
379 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.3
380 TestStartStop/group/no-preload/serial/Pause 3.11
381 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
382 TestStartStop/group/newest-cni/serial/Pause 2.96
383 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
384 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
385 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
386 TestStartStop/group/embed-certs/serial/Pause 2.49
387 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
388 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
389 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.56
TestDownloadOnly/v1.20.0/json-events (22.81s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-422954 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-422954 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (22.812539814s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (22.81s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0414 14:17:12.262448 1203639 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0414 14:17:12.262553 1203639 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-422954
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-422954: exit status 85 (60.887526ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-422954 | jenkins | v1.35.0 | 14 Apr 25 14:16 UTC |          |
	|         | -p download-only-422954        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 14:16:49
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 14:16:49.491825 1203651 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:16:49.492119 1203651 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:16:49.492130 1203651 out.go:358] Setting ErrFile to fd 2...
	I0414 14:16:49.492134 1203651 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:16:49.492363 1203651 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
	W0414 14:16:49.492488 1203651 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20512-1196368/.minikube/config/config.json: open /home/jenkins/minikube-integration/20512-1196368/.minikube/config/config.json: no such file or directory
	I0414 14:16:49.493061 1203651 out.go:352] Setting JSON to true
	I0414 14:16:49.494164 1203651 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":21552,"bootTime":1744618657,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 14:16:49.494237 1203651 start.go:139] virtualization: kvm guest
	I0414 14:16:49.496612 1203651 out.go:97] [download-only-422954] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0414 14:16:49.496752 1203651 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball: no such file or directory
	I0414 14:16:49.496815 1203651 notify.go:220] Checking for updates...
	I0414 14:16:49.497912 1203651 out.go:169] MINIKUBE_LOCATION=20512
	I0414 14:16:49.499071 1203651 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 14:16:49.500226 1203651 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:16:49.501243 1203651 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:16:49.502176 1203651 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0414 14:16:49.503890 1203651 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0414 14:16:49.504096 1203651 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 14:16:49.536714 1203651 out.go:97] Using the kvm2 driver based on user configuration
	I0414 14:16:49.536754 1203651 start.go:297] selected driver: kvm2
	I0414 14:16:49.536762 1203651 start.go:901] validating driver "kvm2" against <nil>
	I0414 14:16:49.537080 1203651 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:16:49.537152 1203651 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20512-1196368/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 14:16:49.553183 1203651 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 14:16:49.553230 1203651 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 14:16:49.553787 1203651 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0414 14:16:49.553953 1203651 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0414 14:16:49.553999 1203651 cni.go:84] Creating CNI manager for ""
	I0414 14:16:49.554053 1203651 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0414 14:16:49.554061 1203651 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0414 14:16:49.554131 1203651 start.go:340] cluster config:
	{Name:download-only-422954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-422954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:16:49.554317 1203651 iso.go:125] acquiring lock: {Name:mkbf783c803effe6c4b8297ac6b84dcca9e29413 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:16:49.555965 1203651 out.go:97] Downloading VM boot image ...
	I0414 14:16:49.556006 1203651 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 14:16:58.945429 1203651 out.go:97] Starting "download-only-422954" primary control-plane node in "download-only-422954" cluster
	I0414 14:16:58.945471 1203651 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0414 14:16:59.043298 1203651 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0414 14:16:59.043333 1203651 cache.go:56] Caching tarball of preloaded images
	I0414 14:16:59.043545 1203651 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0414 14:16:59.045173 1203651 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0414 14:16:59.045213 1203651 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0414 14:16:59.146975 1203651 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:c28dc5b6f01e4b826afa7afc8a0fd1fd -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-422954 host does not exist
	  To start a cluster, run: "minikube start -p download-only-422954"

-- /stdout --
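The exit status 85 asserted below is the expected outcome here: a --download-only start caches the ISO and preload but never creates the VM, so minikube logs has no host to read from (the output above says the control-plane host does not exist). A sketch of the same check run by hand against this profile (an assumed shell session, not harness output):

out/minikube-linux-amd64 logs -p download-only-422954
echo "exit status: $?"   # the test asserts 85 for a host that was never created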
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-422954
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.32.2/json-events (11.46s)

=== RUN   TestDownloadOnly/v1.32.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-691496 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-691496 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (11.455606671s)
--- PASS: TestDownloadOnly/v1.32.2/json-events (11.46s)

TestDownloadOnly/v1.32.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.32.2/preload-exists
I0414 14:17:24.054683 1203639 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
I0414 14:17:24.054744 1203639 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.2/preload-exists (0.00s)

TestDownloadOnly/v1.32.2/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.32.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-691496
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-691496: exit status 85 (63.652871ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-422954 | jenkins | v1.35.0 | 14 Apr 25 14:16 UTC |                     |
	|         | -p download-only-422954        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	| delete  | -p download-only-422954        | download-only-422954 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	| start   | -o=json --download-only        | download-only-691496 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC |                     |
	|         | -p download-only-691496        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 14:17:12
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 14:17:12.640787 1203903 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:17:12.641076 1203903 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:17:12.641087 1203903 out.go:358] Setting ErrFile to fd 2...
	I0414 14:17:12.641093 1203903 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:17:12.641277 1203903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
	I0414 14:17:12.641862 1203903 out.go:352] Setting JSON to true
	I0414 14:17:12.642798 1203903 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":21576,"bootTime":1744618657,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 14:17:12.642934 1203903 start.go:139] virtualization: kvm guest
	I0414 14:17:12.644806 1203903 out.go:97] [download-only-691496] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 14:17:12.644978 1203903 notify.go:220] Checking for updates...
	I0414 14:17:12.646248 1203903 out.go:169] MINIKUBE_LOCATION=20512
	I0414 14:17:12.647414 1203903 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 14:17:12.648574 1203903 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:17:12.649549 1203903 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:17:12.650544 1203903 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0414 14:17:12.652374 1203903 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0414 14:17:12.652634 1203903 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 14:17:12.684774 1203903 out.go:97] Using the kvm2 driver based on user configuration
	I0414 14:17:12.684805 1203903 start.go:297] selected driver: kvm2
	I0414 14:17:12.684811 1203903 start.go:901] validating driver "kvm2" against <nil>
	I0414 14:17:12.685145 1203903 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:17:12.685244 1203903 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20512-1196368/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 14:17:12.700810 1203903 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 14:17:12.700857 1203903 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 14:17:12.701381 1203903 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0414 14:17:12.701538 1203903 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0414 14:17:12.701572 1203903 cni.go:84] Creating CNI manager for ""
	I0414 14:17:12.701617 1203903 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0414 14:17:12.701625 1203903 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0414 14:17:12.701670 1203903 start.go:340] cluster config:
	{Name:download-only-691496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:download-only-691496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:17:12.701782 1203903 iso.go:125] acquiring lock: {Name:mkbf783c803effe6c4b8297ac6b84dcca9e29413 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:17:12.703240 1203903 out.go:97] Starting "download-only-691496" primary control-plane node in "download-only-691496" cluster
	I0414 14:17:12.703265 1203903 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:17:12.803867 1203903 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.2/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
	I0414 14:17:12.803926 1203903 cache.go:56] Caching tarball of preloaded images
	I0414 14:17:12.804079 1203903 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0414 14:17:12.805715 1203903 out.go:97] Downloading Kubernetes v1.32.2 preload ...
	I0414 14:17:12.805748 1203903 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 ...
	I0414 14:17:12.903023 1203903 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.2/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4?checksum=md5:17ec4d97c92604221650726c3857ee2a -> /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
	I0414 14:17:22.289842 1203903 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 ...
	I0414 14:17:22.289953 1203903 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-691496 host does not exist
	  To start a cluster, run: "minikube start -p download-only-691496"

-- /stdout --
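The preload download above pins an md5 digest in its URL, and the log shows the test saving and verifying that checksum for the cached tarball. A sketch of the same verification done by hand, using the path and digest taken from the log (not a command the harness runs):

md5sum /home/jenkins/minikube-integration/20512-1196368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
# expected digest per the download URL above: 17ec4d97c92604221650726c3857ee2a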
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.2/LogsDuration (0.06s)

TestDownloadOnly/v1.32.2/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.32.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.2/DeleteAll (0.14s)

TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-691496
--- PASS: TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.78s)

=== RUN   TestBinaryMirror
I0414 14:17:24.920236 1203639 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-678652 --alsologtostderr --binary-mirror http://127.0.0.1:38863 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-678652" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-678652
--- PASS: TestBinaryMirror (0.78s)

TestOffline (79.09s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-418592 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-418592 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (1m17.979541559s)
helpers_test.go:175: Cleaning up "offline-containerd-418592" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-418592
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-418592: (1.110142404s)
--- PASS: TestOffline (79.09s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-537199
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-537199: exit status 85 (56.742166ms)

-- stdout --
	* Profile "addons-537199" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-537199"

-- /stdout --
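This PreSetup pair checks the same guard from both directions: addon commands aimed at a profile that does not exist yet must fail fast, and the harness asserts exit status 85 together with the hint printed above. A hand-run sketch of the guard (an assumed shell session, not harness output):

out/minikube-linux-amd64 addons enable dashboard -p addons-537199
echo "exit status: $?"   # 85 while the addons-537199 profile does not exist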
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-537199
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-537199: exit status 85 (56.472862ms)

-- stdout --
	* Profile "addons-537199" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-537199"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (212.92s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-537199 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-537199 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m32.918567192s)
--- PASS: TestAddons/Setup (212.92s)

TestAddons/serial/Volcano (41.77s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:815: volcano-admission stabilized in 21.523147ms
addons_test.go:823: volcano-controller stabilized in 21.760211ms
addons_test.go:807: volcano-scheduler stabilized in 21.800392ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-75fdd99bcf-6gqwc" [8814873d-8516-42a9-8bab-7447705a34d3] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003940966s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-75d8f6b5c-q75m7" [a600036d-7994-4449-a13c-8d895e589645] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003717066s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-86bdc5c9c-bx2nz" [4c70490a-9af0-4422-811c-1536332b23bd] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003614476s
addons_test.go:842: (dbg) Run:  kubectl --context addons-537199 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-537199 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-537199 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [86e285d8-1c2b-4eac-806d-90523126e71a] Pending
helpers_test.go:344: "test-job-nginx-0" [86e285d8-1c2b-4eac-806d-90523126e71a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [86e285d8-1c2b-4eac-806d-90523126e71a] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 15.003724605s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-537199 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-537199 addons disable volcano --alsologtostderr -v=1: (11.386266799s)
--- PASS: TestAddons/serial/Volcano (41.77s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-537199 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-537199 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/serial/GCPAuth/FakeCredentials (9.51s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-537199 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-537199 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [68d45435-bf38-4f5f-aa8f-ba470f7eb213] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [68d45435-bf38-4f5f-aa8f-ba470f7eb213] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003877877s
addons_test.go:633: (dbg) Run:  kubectl --context addons-537199 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-537199 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-537199 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.51s)

TestAddons/parallel/Registry (15.61s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 5.860262ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-d76hk" [c9517e51-e485-4790-95c5-3096b91fe3d1] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003146837s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-r2bn8" [a17f17b1-2cfa-49aa-979c-bcfb3f548ab8] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003591862s
addons_test.go:331: (dbg) Run:  kubectl --context addons-537199 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-537199 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-537199 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.867261273s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-537199 ip
2025/04/14 14:22:14 [DEBUG] GET http://192.168.39.63:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-537199 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.61s)
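
The reachability probe at addons_test.go:336 amounts to running a throwaway busybox pod and letting wget confirm that the registry Service answers over cluster DNS. A minimal standalone sketch of the same probe in Go, assuming kubectl is on PATH and pointed at this profile's context (the pod name registry-probe is hypothetical):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // probeRegistry runs a one-shot busybox pod; wget --spider requests the URL
    // without downloading a body, so success means the Service name resolved
    // over cluster DNS and the registry answered.
    func probeRegistry(kubeContext string) error {
        cmd := exec.Command("kubectl", "--context", kubeContext,
            "run", "--rm", "-i", "registry-probe", "--restart=Never",
            "--image=gcr.io/k8s-minikube/busybox", "--",
            "sh", "-c", "wget --spider -S http://registry.kube-system.svc.cluster.local")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        return err
    }

    func main() {
        if err := probeRegistry("addons-537199"); err != nil {
            fmt.Println("registry not reachable:", err)
        }
    }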

TestAddons/parallel/Ingress (20.65s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-537199 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-537199 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-537199 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a164dda2-31ca-434e-a00f-dfacd1b17d4c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a164dda2-31ca-434e-a00f-dfacd1b17d4c] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004009186s
I0414 14:22:41.713642 1203639 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-537199 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-537199 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-537199 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.63
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-537199 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-537199 addons disable ingress-dns --alsologtostderr -v=1: (1.44280695s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-537199 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-537199 addons disable ingress --alsologtostderr -v=1: (7.946588969s)
--- PASS: TestAddons/parallel/Ingress (20.65s)
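
The curl step above (addons_test.go:262) is the crux of the test: the request is sent to 127.0.0.1 from inside the VM, but the Host header makes ingress-nginx route it to the nginx Service. A rough Go equivalent of that header-based routing check; when probing from outside the VM, the node IP shown by `minikube ip` would replace 127.0.0.1:

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        // Equivalent of `curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'`:
        // dial one address, but present a different Host so the ingress
        // controller selects the backend from its host rules.
        req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1/", nil)
        if err != nil {
            panic(err)
        }
        req.Host = "nginx.example.com" // overrides the Host header only, not the dial target
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Println(resp.Status) // 200 OK once the run=nginx pod is Ready
    }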

TestAddons/parallel/InspektorGadget (11.81s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-sp2xz" [6ad967ce-6ce4-43b3-91ca-7e8604d2d201] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004334215s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-537199 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-537199 addons disable inspektor-gadget --alsologtostderr -v=1: (5.808960838s)
--- PASS: TestAddons/parallel/InspektorGadget (11.81s)

TestAddons/parallel/MetricsServer (5.83s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 2.967499ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-sgrlc" [834252b8-19af-48a0-9af6-605064c9e5d0] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004231692s
addons_test.go:402: (dbg) Run:  kubectl --context addons-537199 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-537199 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.83s)

TestAddons/parallel/CSI (43.3s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0414 14:22:11.499613 1203639 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0414 14:22:11.511431 1203639 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0414 14:22:11.511459 1203639 kapi.go:107] duration metric: took 11.867159ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 11.879753ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-537199 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537199 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537199 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537199 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537199 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537199 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537199 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537199 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537199 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537199 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537199 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537199 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537199 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537199 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537199 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537199 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537199 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537199 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-537199 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [63c81e11-2928-4f9c-a5aa-ffd2aed29b6f] Pending
helpers_test.go:344: "task-pv-pod" [63c81e11-2928-4f9c-a5aa-ffd2aed29b6f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [63c81e11-2928-4f9c-a5aa-ffd2aed29b6f] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.004826553s
addons_test.go:511: (dbg) Run:  kubectl --context addons-537199 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-537199 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-537199 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-537199 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-537199 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-537199 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537199 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537199 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537199 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-537199 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [462b239c-a8bd-4bff-9ade-24cb22e2ded8] Pending
helpers_test.go:344: "task-pv-pod-restore" [462b239c-a8bd-4bff-9ade-24cb22e2ded8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [462b239c-a8bd-4bff-9ade-24cb22e2ded8] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003344128s
addons_test.go:553: (dbg) Run:  kubectl --context addons-537199 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-537199 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-537199 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-537199 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-537199 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-537199 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.793086065s)
--- PASS: TestAddons/parallel/CSI (43.30s)
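
The wall of helpers_test.go:394 lines above is a poll loop: the helper re-reads {.status.phase} until the PVC reports Bound, first for hpvc and later for hpvc-restore. A sketch of that loop with hypothetical names, assuming kubectl is on PATH:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitPVCBound polls `kubectl get pvc <name> -o jsonpath={.status.phase}`
    // until the claim reports Bound or the deadline expires, mirroring the
    // repeated helpers_test.go:394 invocations in the log above.
    func waitPVCBound(kubeContext, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", kubeContext,
                "get", "pvc", name, "-n", ns,
                "-o", "jsonpath={.status.phase}").Output()
            if err == nil && strings.TrimSpace(string(out)) == "Bound" {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pvc %s/%s not Bound within %v", ns, name, timeout)
    }

    func main() {
        if err := waitPVCBound("addons-537199", "default", "hpvc", 6*time.Minute); err != nil {
            fmt.Println(err)
        }
    }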

TestAddons/parallel/Headlamp (20.15s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-537199 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-537199 --alsologtostderr -v=1: (1.448605478s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-jvrmc" [ec151aa5-3ca0-4ece-8d0d-1e343b420ac4] Pending
helpers_test.go:344: "headlamp-5d4b5d7bd6-jvrmc" [ec151aa5-3ca0-4ece-8d0d-1e343b420ac4] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-jvrmc" [ec151aa5-3ca0-4ece-8d0d-1e343b420ac4] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.005019808s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-537199 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-537199 addons disable headlamp --alsologtostderr -v=1: (5.694888727s)
--- PASS: TestAddons/parallel/Headlamp (20.15s)

TestAddons/parallel/CloudSpanner (5.89s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7dc7f9b5b8-tb2jm" [5f95fc24-6395-432e-9f23-53596e9d8f21] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.026338395s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-537199 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.89s)

TestAddons/parallel/LocalPath (55.68s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-537199 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-537199 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537199 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537199 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537199 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537199 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537199 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537199 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537199 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [08892387-5fe0-4b91-bbe9-4f72ac60292f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [08892387-5fe0-4b91-bbe9-4f72ac60292f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [08892387-5fe0-4b91-bbe9-4f72ac60292f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.005358076s
addons_test.go:906: (dbg) Run:  kubectl --context addons-537199 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-537199 ssh "cat /opt/local-path-provisioner/pvc-3274e1d6-244f-465d-bd50-892f00712b6c_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-537199 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-537199 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-537199 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-537199 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.806680367s)
--- PASS: TestAddons/parallel/LocalPath (55.68s)

TestAddons/parallel/NvidiaDevicePlugin (6.63s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-fwwfs" [e0a13f63-22da-4afa-8983-25add36fe465] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004109622s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-537199 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.63s)

TestAddons/parallel/Yakd (12.07s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-mjvgt" [e179c69e-7243-4f9e-8ca0-b87253883565] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004138216s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-537199 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-537199 addons disable yakd --alsologtostderr -v=1: (6.062158345s)
--- PASS: TestAddons/parallel/Yakd (12.07s)

TestAddons/StoppedEnableDisable (91.26s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-537199
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-537199: (1m30.971373858s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-537199
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-537199
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-537199
--- PASS: TestAddons/StoppedEnableDisable (91.26s)

TestCertOptions (92.04s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-434986 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-434986 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m30.135268586s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-434986 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-434986 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-434986 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-434986" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-434986
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-434986: (1.418214578s)
--- PASS: TestCertOptions (92.04s)
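
The openssl step above checks that the generated apiserver certificate picked up the extra --apiserver-ips/--apiserver-names SANs. The same inspection can be done with Go's crypto/x509; this sketch assumes the cert has first been copied out of the VM to a local apiserver.crt (a hypothetical path):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "net"
        "os"
    )

    func main() {
        // Local copy of /var/lib/minikube/certs/apiserver.crt (path is an assumption).
        data, err := os.ReadFile("apiserver.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // The test passed --apiserver-ips=192.168.15.15; it should appear as an IP SAN.
        wantIP := net.ParseIP("192.168.15.15")
        found := false
        for _, ip := range cert.IPAddresses {
            if ip.Equal(wantIP) {
                found = true
            }
        }
        fmt.Println("has SAN 192.168.15.15:", found)
        fmt.Println("DNS SANs:", cert.DNSNames) // should include localhost and www.google.com
    }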

TestCertExpiration (331.49s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-976000 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-976000 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m43.672534642s)
E0414 15:25:58.678788 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/addons-537199/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-976000 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-976000 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (46.159646467s)
helpers_test.go:175: Cleaning up "cert-expiration-976000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-976000
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-976000: (1.66192154s)
--- PASS: TestCertExpiration (331.49s)

TestForceSystemdFlag (99.32s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-588925 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-588925 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m38.091653026s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-588925 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-588925" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-588925
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-588925: (1.020858985s)
--- PASS: TestForceSystemdFlag (99.32s)
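
The assertion behind docker_test.go:121 is that --force-systemd flips containerd to the systemd cgroup driver. A crude standalone version of the check, reusing the binary and profile name from this run; it simply greps the fetched config for the SystemdCgroup toggle rather than parsing the TOML:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same fetch the test performs: cat the containerd config over minikube ssh.
        out, err := exec.Command("out/minikube-linux-amd64",
            "-p", "force-systemd-flag-588925",
            "ssh", "cat /etc/containerd/config.toml").Output()
        if err != nil {
            panic(err)
        }
        if strings.Contains(string(out), "SystemdCgroup = true") {
            fmt.Println("containerd is using the systemd cgroup driver")
        } else {
            fmt.Println("SystemdCgroup not enabled")
        }
    }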

TestForceSystemdEnv (45.68s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-620360 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-620360 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (44.419376333s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-620360 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-620360" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-620360
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-620360: (1.065447758s)
--- PASS: TestForceSystemdEnv (45.68s)

TestKVMDriverInstallOrUpdate (4.12s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I0414 15:23:51.681129 1203639 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0414 15:23:51.681315 1203639 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_containerd_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0414 15:23:51.715156 1203639 install.go:62] docker-machine-driver-kvm2: exit status 1
W0414 15:23:51.715401 1203639 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0414 15:23:51.715486 1203639 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4007573333/001/docker-machine-driver-kvm2
I0414 15:23:52.017416 1203639 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate4007573333/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940] Decompressors:map[bz2:0xc000523698 gz:0xc000523990 tar:0xc000523920 tar.bz2:0xc000523930 tar.gz:0xc000523940 tar.xz:0xc000523950 tar.zst:0xc000523980 tbz2:0xc000523930 tgz:0xc000523940 txz:0xc000523950 tzst:0xc000523980 xz:0xc000523998 zip:0xc0005239b0 zst:0xc0005239e0] Getters:map[file:0xc00058dab0 http:0xc001dd94f0 https:0xc001dd9540] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0414 15:23:52.017483 1203639 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4007573333/001/docker-machine-driver-kvm2
I0414 15:23:54.017571 1203639 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0414 15:23:54.017677 1203639 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_containerd_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0414 15:23:54.048392 1203639 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0414 15:23:54.048424 1203639 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0414 15:23:54.048501 1203639 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0414 15:23:54.048533 1203639 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4007573333/002/docker-machine-driver-kvm2
I0414 15:23:54.082789 1203639 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate4007573333/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940] Decompressors:map[bz2:0xc000523698 gz:0xc000523990 tar:0xc000523920 tar.bz2:0xc000523930 tar.gz:0xc000523940 tar.xz:0xc000523950 tar.zst:0xc000523980 tbz2:0xc000523930 tgz:0xc000523940 txz:0xc000523950 tzst:0xc000523980 xz:0xc000523998 zip:0xc0005239b0 zst:0xc0005239e0] Getters:map[file:0xc0018c0210 http:0xc0006d65a0 https:0xc0006d65f0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0414 15:23:54.082857 1203639 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4007573333/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (4.12s)
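
The two 404s logged above are expected: the checksum file for the arch-suffixed asset docker-machine-driver-kvm2-amd64 is missing from the v1.3.0 release, so the installer falls back to the unsuffixed "common" name, exactly as driver.go:46 reports. A simplified sketch of that fallback (checksum verification omitted; the real code also fetches and checks the .sha256 file):

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "os"
    )

    // fetch downloads url to dst, treating any non-200 status as an error.
    func fetch(url, dst string) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("bad response code: %d", resp.StatusCode)
        }
        f, err := os.Create(dst)
        if err != nil {
            return err
        }
        defer f.Close()
        _, err = io.Copy(f, resp.Body)
        return err
    }

    func main() {
        base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2"
        dst := "docker-machine-driver-kvm2"
        // Try the arch-specific asset first, then fall back to the common one.
        if err := fetch(base+"-amd64", dst); err != nil {
            fmt.Println("arch-specific download failed:", err, "- trying the common version")
            if err := fetch(base, dst); err != nil {
                panic(err)
            }
        }
    }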

TestErrorSpam/setup (45.26s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-606379 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-606379 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-606379 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-606379 --driver=kvm2  --container-runtime=containerd: (45.258564415s)
--- PASS: TestErrorSpam/setup (45.26s)

TestErrorSpam/start (0.35s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-606379 --log_dir /tmp/nospam-606379 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-606379 --log_dir /tmp/nospam-606379 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-606379 --log_dir /tmp/nospam-606379 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

TestErrorSpam/status (0.72s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-606379 --log_dir /tmp/nospam-606379 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-606379 --log_dir /tmp/nospam-606379 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-606379 --log_dir /tmp/nospam-606379 status
--- PASS: TestErrorSpam/status (0.72s)

TestErrorSpam/pause (1.49s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-606379 --log_dir /tmp/nospam-606379 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-606379 --log_dir /tmp/nospam-606379 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-606379 --log_dir /tmp/nospam-606379 pause
--- PASS: TestErrorSpam/pause (1.49s)

TestErrorSpam/unpause (1.67s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-606379 --log_dir /tmp/nospam-606379 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-606379 --log_dir /tmp/nospam-606379 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-606379 --log_dir /tmp/nospam-606379 unpause
--- PASS: TestErrorSpam/unpause (1.67s)

TestErrorSpam/stop (5.05s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-606379 --log_dir /tmp/nospam-606379 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-606379 --log_dir /tmp/nospam-606379 stop: (1.303564529s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-606379 --log_dir /tmp/nospam-606379 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-606379 --log_dir /tmp/nospam-606379 stop: (1.69108334s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-606379 --log_dir /tmp/nospam-606379 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-606379 --log_dir /tmp/nospam-606379 stop: (2.055553626s)
--- PASS: TestErrorSpam/stop (5.05s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20512-1196368/.minikube/files/etc/test/nested/copy/1203639/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (56.37s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 start -p functional-905978 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E0414 14:25:58.679628 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/addons-537199/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:25:58.686150 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/addons-537199/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:25:58.697584 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/addons-537199/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:25:58.719063 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/addons-537199/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:25:58.760501 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/addons-537199/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:25:58.842022 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/addons-537199/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:25:59.003591 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/addons-537199/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:25:59.325660 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/addons-537199/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:25:59.967771 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/addons-537199/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:26:01.249788 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/addons-537199/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:26:03.812170 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/addons-537199/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:26:08.933773 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/addons-537199/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2251: (dbg) Done: out/minikube-linux-amd64 start -p functional-905978 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (56.373215134s)
--- PASS: TestFunctional/serial/StartWithProxy (56.37s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (41.54s)

=== RUN   TestFunctional/serial/SoftStart
I0414 14:26:18.360999 1203639 config.go:182] Loaded profile config "functional-905978": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
functional_test.go:676: (dbg) Run:  out/minikube-linux-amd64 start -p functional-905978 --alsologtostderr -v=8
E0414 14:26:19.175819 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/addons-537199/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:26:39.657951 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/addons-537199/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:676: (dbg) Done: out/minikube-linux-amd64 start -p functional-905978 --alsologtostderr -v=8: (41.539043842s)
functional_test.go:680: soft start took 41.53969728s for "functional-905978" cluster.
I0414 14:26:59.900470 1203639 config.go:182] Loaded profile config "functional-905978": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/SoftStart (41.54s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.12s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-905978 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.12s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-905978 cache add registry.k8s.io/pause:3.1: (1.114460824s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-905978 cache add registry.k8s.io/pause:3.3: (1.093875287s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 cache add registry.k8s.io/pause:latest
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-905978 cache add registry.k8s.io/pause:latest: (1.003162933s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.21s)

TestFunctional/serial/CacheCmd/cache/add_local (1.95s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-905978 /tmp/TestFunctionalserialCacheCmdcacheadd_local3549664697/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 cache add minikube-local-cache-test:functional-905978
functional_test.go:1106: (dbg) Done: out/minikube-linux-amd64 -p functional-905978 cache add minikube-local-cache-test:functional-905978: (1.630919028s)
functional_test.go:1111: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 cache delete minikube-local-cache-test:functional-905978
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-905978
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.95s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-905978 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (221.279406ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 cache reload
functional_test.go:1180: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)
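
For context, the sequence above exercises minikube's image cache round trip: remove the image from the node, watch crictl inspecti fail, then let `cache reload` push it back from the host-side cache. A compact replay of the same commands, assuming the binary and profile from this run:

    package main

    import (
        "os"
        "os/exec"
    )

    // run executes a command, streaming its output, and returns its exit error.
    func run(args ...string) error {
        cmd := exec.Command(args[0], args[1:]...)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        mk := "out/minikube-linux-amd64"
        _ = run(mk, "-p", "functional-905978", "ssh", "sudo crictl rmi registry.k8s.io/pause:latest")
        // Expected to fail here: the image is gone from the node's containerd store.
        if err := run(mk, "-p", "functional-905978", "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
            // cache reload re-pushes every cached image into the node.
            _ = run(mk, "-p", "functional-905978", "cache", "reload")
        }
        _ = run(mk, "-p", "functional-905978", "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest")
    }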

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 kubectl -- --context functional-905978 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-905978 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (44.5s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-amd64 start -p functional-905978 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0414 14:27:20.619659 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/addons-537199/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:774: (dbg) Done: out/minikube-linux-amd64 start -p functional-905978 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (44.503910593s)
functional_test.go:778: restart took 44.504111802s for "functional-905978" cluster.
I0414 14:27:51.920124 1203639 config.go:182] Loaded profile config "functional-905978": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/ExtraConfig (44.50s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-905978 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
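
Note: the health check above reduces to reading each control-plane pod's phase and its Ready condition from the API server. A hedged Go sketch of the same check via kubectl's JSON output follows (context name taken from the log; the struct covers only the fields the check needs, not the full pod schema).

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// podList captures just the fields needed from kubectl get po -o json.
type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-905978",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := "False"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		// Mirrors the "phase: Running" / "status: Ready" lines in the log.
		fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}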

TestFunctional/serial/LogsCmd (1.34s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-amd64 -p functional-905978 logs: (1.335688487s)
--- PASS: TestFunctional/serial/LogsCmd (1.34s)

TestFunctional/serial/LogsFileCmd (1.34s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 logs --file /tmp/TestFunctionalserialLogsFileCmd3938321256/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-amd64 -p functional-905978 logs --file /tmp/TestFunctionalserialLogsFileCmd3938321256/001/logs.txt: (1.333974321s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.34s)

TestFunctional/serial/InvalidService (4.3s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-905978 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-905978
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-905978: exit status 115 (276.384527ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.67:32437 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-905978 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.30s)
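
Note: exit status 115 is minikube's SVC_UNREACHABLE code; the Service and its NodePort exist, but no running pod backs the selector. A small sketch reproducing the failure mode (context and profile names come from the log; testdata/invalidsvc.yaml is the manifest the test applies, so substitute any Service whose selector matches nothing):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Apply a Service whose selector matches no running pod.
	exec.Command("kubectl", "--context", "functional-905978",
		"apply", "-f", "testdata/invalidsvc.yaml").Run()
	defer exec.Command("kubectl", "--context", "functional-905978",
		"delete", "-f", "testdata/invalidsvc.yaml").Run()

	err := exec.Command("minikube", "service", "invalid-svc", "-p", "functional-905978").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("exit code:", ee.ExitCode()) // 115, matching SVC_UNREACHABLE above
	}
}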

TestFunctional/parallel/ConfigCmd (0.34s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-905978 config get cpus: exit status 14 (52.820451ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-905978 config get cpus: exit status 14 (52.725775ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.34s)
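
Note: the contract being exercised is that config get on an unset key exits with status 14 instead of printing an empty value, so callers can distinguish "unset" from "set to an empty string". A sketch of that contract (profile name assumed; not the test's own code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// getCpus returns the configured value, or ok=false when the key is unset,
// which minikube signals with exit status 14 as seen in the log above.
func getCpus(profile string) (value string, ok bool) {
	out, err := exec.Command("minikube", "-p", profile, "config", "get", "cpus").Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 14 {
		return "", false
	}
	return string(out), err == nil
}

func main() {
	if v, ok := getCpus("functional-905978"); ok {
		fmt.Println("cpus =", v)
	} else {
		fmt.Println("cpus is not set")
	}
}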

TestFunctional/parallel/DashboardCmd (30.84s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-905978 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-905978 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1212117: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (30.84s)

TestFunctional/parallel/DryRun (0.3s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-905978 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-905978 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (144.720643ms)

-- stdout --
	* [functional-905978] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20512-1196368/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1196368/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0414 14:28:11.028418 1211654 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:28:11.028665 1211654 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:28:11.028673 1211654 out.go:358] Setting ErrFile to fd 2...
	I0414 14:28:11.028678 1211654 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:28:11.028877 1211654 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
	I0414 14:28:11.029480 1211654 out.go:352] Setting JSON to false
	I0414 14:28:11.030669 1211654 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":22234,"bootTime":1744618657,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 14:28:11.030759 1211654 start.go:139] virtualization: kvm guest
	I0414 14:28:11.032142 1211654 out.go:177] * [functional-905978] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 14:28:11.033640 1211654 notify.go:220] Checking for updates...
	I0414 14:28:11.033681 1211654 out.go:177]   - MINIKUBE_LOCATION=20512
	I0414 14:28:11.034980 1211654 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 14:28:11.036265 1211654 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:28:11.037649 1211654 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:28:11.039122 1211654 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 14:28:11.040342 1211654 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 14:28:11.041844 1211654 config.go:182] Loaded profile config "functional-905978": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:28:11.042252 1211654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:28:11.042339 1211654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:28:11.059946 1211654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45933
	I0414 14:28:11.060370 1211654 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:28:11.060863 1211654 main.go:141] libmachine: Using API Version  1
	I0414 14:28:11.060886 1211654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:28:11.061256 1211654 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:28:11.061466 1211654 main.go:141] libmachine: (functional-905978) Calling .DriverName
	I0414 14:28:11.061745 1211654 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 14:28:11.062104 1211654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:28:11.062162 1211654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:28:11.078806 1211654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36629
	I0414 14:28:11.079381 1211654 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:28:11.079986 1211654 main.go:141] libmachine: Using API Version  1
	I0414 14:28:11.080020 1211654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:28:11.080397 1211654 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:28:11.080603 1211654 main.go:141] libmachine: (functional-905978) Calling .DriverName
	I0414 14:28:11.118732 1211654 out.go:177] * Using the kvm2 driver based on existing profile
	I0414 14:28:11.119752 1211654 start.go:297] selected driver: kvm2
	I0414 14:28:11.119766 1211654 start.go:901] validating driver "kvm2" against &{Name:functional-905978 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterNa
me:functional-905978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.67 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountStri
ng:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:28:11.119900 1211654 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 14:28:11.121810 1211654 out.go:201] 
	W0414 14:28:11.122823 1211654 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0414 14:28:11.123735 1211654 out.go:201] 

** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-amd64 start -p functional-905978 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.30s)
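
Note: the first dry run fails validation before any VM work starts, because 250MB is below the 1800MB usable minimum, and the process exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY). A sketch probing the same validation path (binary name and profile are assumptions):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("minikube", "start", "-p", "functional-905978",
		"--dry-run", "--memory", "250MB",
		"--driver=kvm2", "--container-runtime=containerd").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// 23 is the RSRC_INSUFFICIENT_REQ_MEMORY exit code seen in the log.
		fmt.Println("dry run rejected, exit code:", ee.ExitCode())
	} else if err == nil {
		fmt.Println("dry run unexpectedly passed validation")
	}
}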

TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 start -p functional-905978 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-905978 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (162.017408ms)

-- stdout --
	* [functional-905978] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20512-1196368/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1196368/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0414 14:28:11.344521 1211754 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:28:11.344653 1211754 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:28:11.344666 1211754 out.go:358] Setting ErrFile to fd 2...
	I0414 14:28:11.344670 1211754 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:28:11.344949 1211754 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
	I0414 14:28:11.345544 1211754 out.go:352] Setting JSON to false
	I0414 14:28:11.346731 1211754 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":22234,"bootTime":1744618657,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 14:28:11.346804 1211754 start.go:139] virtualization: kvm guest
	I0414 14:28:11.348617 1211754 out.go:177] * [functional-905978] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0414 14:28:11.350066 1211754 out.go:177]   - MINIKUBE_LOCATION=20512
	I0414 14:28:11.350062 1211754 notify.go:220] Checking for updates...
	I0414 14:28:11.352279 1211754 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 14:28:11.353454 1211754 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 14:28:11.354684 1211754 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 14:28:11.355846 1211754 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 14:28:11.356805 1211754 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 14:28:11.358374 1211754 config.go:182] Loaded profile config "functional-905978": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 14:28:11.359090 1211754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:28:11.359228 1211754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:28:11.375200 1211754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39557
	I0414 14:28:11.375788 1211754 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:28:11.376408 1211754 main.go:141] libmachine: Using API Version  1
	I0414 14:28:11.376438 1211754 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:28:11.376894 1211754 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:28:11.377102 1211754 main.go:141] libmachine: (functional-905978) Calling .DriverName
	I0414 14:28:11.377403 1211754 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 14:28:11.377847 1211754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 14:28:11.377897 1211754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:28:11.393774 1211754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44039
	I0414 14:28:11.394456 1211754 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:28:11.394978 1211754 main.go:141] libmachine: Using API Version  1
	I0414 14:28:11.395001 1211754 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:28:11.395453 1211754 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:28:11.395666 1211754 main.go:141] libmachine: (functional-905978) Calling .DriverName
	I0414 14:28:11.437513 1211754 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0414 14:28:11.438794 1211754 start.go:297] selected driver: kvm2
	I0414 14:28:11.438813 1211754 start.go:901] validating driver "kvm2" against &{Name:functional-905978 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterNa
me:functional-905978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.67 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountStri
ng:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:28:11.438972 1211754 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 14:28:11.441068 1211754 out.go:201] 
	W0414 14:28:11.442385 1211754 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0414 14:28:11.443456 1211754 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

TestFunctional/parallel/StatusCmd (0.71s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.71s)

TestFunctional/parallel/ServiceCmdConnect (10.51s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-905978 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-905978 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-p96qn" [b052f1db-cbf7-44d5-aff4-99b02f3afc9a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-p96qn" [b052f1db-cbf7-44d5-aff4-99b02f3afc9a] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.002919651s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.50.67:30224
functional_test.go:1692: http://192.168.50.67:30224: success! body:

Hostname: hello-node-connect-58f9cf68d8-p96qn

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.67:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.50.67:30224
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.51s)
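
Note: the Go-http-client/1.1 user agent echoed in the headers is the test fetching the NodePort URL with Go's default HTTP client. The same probe as a standalone sketch (the URL is the cluster-specific one minikube printed; substitute the output of minikube service hello-node-connect --url on your own cluster):

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Address reported by minikube in the log; IP and NodePort differ per cluster.
	const url = "http://192.168.50.67:30224"

	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// echoserver reflects the hostname, request line, and headers back to the caller.
	fmt.Println(string(body))
}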

TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (42.99s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [3d130cba-569e-4595-ab8f-b7212adf2ae9] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00353795s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-905978 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-905978 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-905978 get pvc myclaim -o=json
I0414 14:28:05.604569 1203639 retry.go:31] will retry after 2.86543145s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:adc40a77-d3a3-43a8-bc35-ecbbd268ff25 ResourceVersion:714 Generation:0 CreationTimestamp:2025-04-14 14:28:05 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc0015c1e70 VolumeMode:0xc0015c1e80 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-905978 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-905978 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c8464dd6-1e1d-47d7-89fa-093d8724b480] Pending
helpers_test.go:344: "sp-pod" [c8464dd6-1e1d-47d7-89fa-093d8724b480] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c8464dd6-1e1d-47d7-89fa-093d8724b480] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004094344s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-905978 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-905978 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-905978 delete -f testdata/storage-provisioner/pod.yaml: (1.239363127s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-905978 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4c735d32-4fb3-4c77-b443-0feb59a6e419] Pending
helpers_test.go:344: "sp-pod" [4c735d32-4fb3-4c77-b443-0feb59a6e419] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4c735d32-4fb3-4c77-b443-0feb59a6e419] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 20.004225643s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-905978 exec sp-pod -- ls /tmp/mount
2025/04/14 14:28:43 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (42.99s)
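
Note: the retry.go line mid-test is the harness polling the PVC until the storage provisioner moves it from Pending to Bound. The same wait, reduced to a sketch (context and claim name from the log; the timeout and fixed poll interval are arbitrary choices, where the harness uses jittered backoff):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Read only the phase field, the same value the harness compares against "Bound".
		out, err := exec.Command("kubectl", "--context", "functional-905978",
			"get", "pvc", "myclaim", "-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			fmt.Println("pvc is Bound")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for the pvc to bind")
}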

TestFunctional/parallel/SSHCmd (0.43s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.43s)

TestFunctional/parallel/CpCmd (1.31s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 ssh -n functional-905978 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 cp functional-905978:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3298661874/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 ssh -n functional-905978 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 ssh -n functional-905978 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.31s)

TestFunctional/parallel/MySQL (27.41s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-905978 replace --force -f testdata/mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-vpw2b" [cc87f519-2918-4035-96fa-551a75b7f492] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-vpw2b" [cc87f519-2918-4035-96fa-551a75b7f492] Running
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.00405445s
functional_test.go:1824: (dbg) Run:  kubectl --context functional-905978 exec mysql-58ccfd96bb-vpw2b -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-905978 exec mysql-58ccfd96bb-vpw2b -- mysql -ppassword -e "show databases;": exit status 1 (195.348018ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0414 14:28:34.767117 1203639 retry.go:31] will retry after 731.805432ms: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-905978 exec mysql-58ccfd96bb-vpw2b -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-905978 exec mysql-58ccfd96bb-vpw2b -- mysql -ppassword -e "show databases;": exit status 1 (250.753033ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0414 14:28:35.750300 1203639 retry.go:31] will retry after 1.036764273s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-905978 exec mysql-58ccfd96bb-vpw2b -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-905978 exec mysql-58ccfd96bb-vpw2b -- mysql -ppassword -e "show databases;": exit status 1 (123.61377ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0414 14:28:36.911916 1203639 retry.go:31] will retry after 2.747966863s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-905978 exec mysql-58ccfd96bb-vpw2b -- mysql -ppassword -e "show databases;"
E0414 14:28:42.541428 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/addons-537199/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/MySQL (27.41s)
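
Note: the two retry errors are different transient start-up states: ERROR 1045 means mysqld is answering but the configured root password is not accepted yet, and ERROR 2002 means the server is not listening on its socket at all (typically during the restart that ends container initialization); both clear once init finishes. A sketch of the retry-until-queryable loop (context and password follow the log; targeting deploy/mysql instead of the hashed pod name is a convenience and assumes a kubectl recent enough to resolve it):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for attempt := 1; attempt <= 10; attempt++ {
		// Exec the client inside the mysql workload, as the test does with the pod name.
		out, err := exec.Command("kubectl", "--context", "functional-905978",
			"exec", "deploy/mysql", "--",
			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		// 1045 (access denied) and 2002 (socket not ready) are both transient
		// while the container initializes, per the log above.
		fmt.Printf("attempt %d: %v\n", attempt, err)
		time.Sleep(2 * time.Second)
	}
}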

TestFunctional/parallel/FileSync (0.23s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/1203639/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 ssh "sudo cat /etc/test/nested/copy/1203639/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.23s)

TestFunctional/parallel/CertSync (1.42s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/1203639.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 ssh "sudo cat /etc/ssl/certs/1203639.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/1203639.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 ssh "sudo cat /usr/share/ca-certificates/1203639.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/12036392.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 ssh "sudo cat /etc/ssl/certs/12036392.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/12036392.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 ssh "sudo cat /usr/share/ca-certificates/12036392.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.42s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-905978 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 ssh "sudo systemctl is-active docker"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-905978 ssh "sudo systemctl is-active docker": exit status 1 (228.29466ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 ssh "sudo systemctl is-active crio"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-905978 ssh "sudo systemctl is-active crio": exit status 1 (233.18766ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)
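
Note: systemctl is-active prints the unit state and exits non-zero for anything but an active unit (3 for inactive, which minikube's ssh wrapper surfaces as its own exit status 1), so the non-zero exits here are the passing outcome. A sketch asserting the other runtimes stay disabled on this containerd cluster (profile name assumed):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	for _, unit := range []string{"docker", "crio"} {
		// A zero exit means the unit is active, which would be a failure here.
		out, err := exec.Command("minikube", "-p", "functional-905978",
			"ssh", "sudo", "systemctl", "is-active", unit).CombinedOutput()
		if err != nil {
			fmt.Printf("%s is not active: %s", unit, out)
		} else {
			fmt.Printf("%s is unexpectedly active\n", unit)
		}
	}
}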

TestFunctional/parallel/License (0.59s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.59s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-905978 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.2
registry.k8s.io/kube-proxy:v1.32.2
registry.k8s.io/kube-controller-manager:v1.32.2
registry.k8s.io/kube-apiserver:v1.32.2
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/minikube-local-cache-test:functional-905978
docker.io/kindest/kindnetd:v20241212-9f82dd49
docker.io/kicbase/echo-server:functional-905978
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-905978 image ls --format short --alsologtostderr:
I0414 14:28:24.351239 1212677 out.go:345] Setting OutFile to fd 1 ...
I0414 14:28:24.351400 1212677 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 14:28:24.351406 1212677 out.go:358] Setting ErrFile to fd 2...
I0414 14:28:24.351410 1212677 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 14:28:24.351606 1212677 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
I0414 14:28:24.352195 1212677 config.go:182] Loaded profile config "functional-905978": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0414 14:28:24.352292 1212677 config.go:182] Loaded profile config "functional-905978": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0414 14:28:24.352646 1212677 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0414 14:28:24.352722 1212677 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 14:28:24.368405 1212677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40219
I0414 14:28:24.368970 1212677 main.go:141] libmachine: () Calling .GetVersion
I0414 14:28:24.369531 1212677 main.go:141] libmachine: Using API Version  1
I0414 14:28:24.369556 1212677 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 14:28:24.369939 1212677 main.go:141] libmachine: () Calling .GetMachineName
I0414 14:28:24.370178 1212677 main.go:141] libmachine: (functional-905978) Calling .GetState
I0414 14:28:24.372016 1212677 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0414 14:28:24.372061 1212677 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 14:28:24.388106 1212677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33605
I0414 14:28:24.388602 1212677 main.go:141] libmachine: () Calling .GetVersion
I0414 14:28:24.389130 1212677 main.go:141] libmachine: Using API Version  1
I0414 14:28:24.389169 1212677 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 14:28:24.389524 1212677 main.go:141] libmachine: () Calling .GetMachineName
I0414 14:28:24.389731 1212677 main.go:141] libmachine: (functional-905978) Calling .DriverName
I0414 14:28:24.389931 1212677 ssh_runner.go:195] Run: systemctl --version
I0414 14:28:24.389964 1212677 main.go:141] libmachine: (functional-905978) Calling .GetSSHHostname
I0414 14:28:24.392769 1212677 main.go:141] libmachine: (functional-905978) DBG | domain functional-905978 has defined MAC address 52:54:00:22:23:40 in network mk-functional-905978
I0414 14:28:24.393227 1212677 main.go:141] libmachine: (functional-905978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:23:40", ip: ""} in network mk-functional-905978: {Iface:virbr1 ExpiryTime:2025-04-14 15:25:36 +0000 UTC Type:0 Mac:52:54:00:22:23:40 Iaid: IPaddr:192.168.50.67 Prefix:24 Hostname:functional-905978 Clientid:01:52:54:00:22:23:40}
I0414 14:28:24.393268 1212677 main.go:141] libmachine: (functional-905978) DBG | domain functional-905978 has defined IP address 192.168.50.67 and MAC address 52:54:00:22:23:40 in network mk-functional-905978
I0414 14:28:24.393384 1212677 main.go:141] libmachine: (functional-905978) Calling .GetSSHPort
I0414 14:28:24.393545 1212677 main.go:141] libmachine: (functional-905978) Calling .GetSSHKeyPath
I0414 14:28:24.393698 1212677 main.go:141] libmachine: (functional-905978) Calling .GetSSHUsername
I0414 14:28:24.393823 1212677 sshutil.go:53] new ssh client: &{IP:192.168.50.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/functional-905978/id_rsa Username:docker}
I0414 14:28:24.481662 1212677 ssh_runner.go:195] Run: sudo crictl images --output json
I0414 14:28:24.528390 1212677 main.go:141] libmachine: Making call to close driver server
I0414 14:28:24.528407 1212677 main.go:141] libmachine: (functional-905978) Calling .Close
I0414 14:28:24.528756 1212677 main.go:141] libmachine: Successfully made call to close driver server
I0414 14:28:24.528788 1212677 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 14:28:24.528797 1212677 main.go:141] libmachine: (functional-905978) DBG | Closing plugin on server side
I0414 14:28:24.528803 1212677 main.go:141] libmachine: Making call to close driver server
I0414 14:28:24.528941 1212677 main.go:141] libmachine: (functional-905978) Calling .Close
I0414 14:28:24.529213 1212677 main.go:141] libmachine: Successfully made call to close driver server
I0414 14:28:24.529228 1212677 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-905978 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/kicbase/echo-server               | functional-905978  | sha256:9056ab | 2.37MB |
| registry.k8s.io/etcd                        | 3.5.16-0           | sha256:a9e7e6 | 57.7MB |
| registry.k8s.io/kube-apiserver              | v1.32.2            | sha256:85b7a1 | 28.7MB |
| registry.k8s.io/pause                       | 3.10               | sha256:873ed7 | 320kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| docker.io/kindest/kindnetd                  | v20241212-9f82dd49 | sha256:d30084 | 39MB   |
| docker.io/library/minikube-local-cache-test | functional-905978  | sha256:7f8f8e | 991B   |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:c69fa2 | 18.6MB |
| registry.k8s.io/kube-controller-manager     | v1.32.2            | sha256:b6a454 | 26.3MB |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| docker.io/library/nginx                     | latest             | sha256:4cad75 | 72.2MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| localhost/my-image                          | functional-905978  | sha256:cffdda | 775kB  |
| registry.k8s.io/kube-proxy                  | v1.32.2            | sha256:f13328 | 30.9MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| docker.io/library/mysql                     | 5.7                | sha256:510733 | 138MB  |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/kube-scheduler              | v1.32.2            | sha256:d8e673 | 20.7MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-905978 image ls --format table --alsologtostderr:
I0414 14:28:30.168024 1212845 out.go:345] Setting OutFile to fd 1 ...
I0414 14:28:30.168177 1212845 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 14:28:30.168189 1212845 out.go:358] Setting ErrFile to fd 2...
I0414 14:28:30.168195 1212845 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 14:28:30.168463 1212845 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
I0414 14:28:30.169143 1212845 config.go:182] Loaded profile config "functional-905978": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0414 14:28:30.169249 1212845 config.go:182] Loaded profile config "functional-905978": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0414 14:28:30.169619 1212845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0414 14:28:30.169731 1212845 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 14:28:30.185886 1212845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42793
I0414 14:28:30.186481 1212845 main.go:141] libmachine: () Calling .GetVersion
I0414 14:28:30.187119 1212845 main.go:141] libmachine: Using API Version  1
I0414 14:28:30.187150 1212845 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 14:28:30.187575 1212845 main.go:141] libmachine: () Calling .GetMachineName
I0414 14:28:30.187832 1212845 main.go:141] libmachine: (functional-905978) Calling .GetState
I0414 14:28:30.190031 1212845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0414 14:28:30.190109 1212845 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 14:28:30.206645 1212845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37821
I0414 14:28:30.207164 1212845 main.go:141] libmachine: () Calling .GetVersion
I0414 14:28:30.207648 1212845 main.go:141] libmachine: Using API Version  1
I0414 14:28:30.207670 1212845 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 14:28:30.208036 1212845 main.go:141] libmachine: () Calling .GetMachineName
I0414 14:28:30.208240 1212845 main.go:141] libmachine: (functional-905978) Calling .DriverName
I0414 14:28:30.208440 1212845 ssh_runner.go:195] Run: systemctl --version
I0414 14:28:30.208478 1212845 main.go:141] libmachine: (functional-905978) Calling .GetSSHHostname
I0414 14:28:30.211827 1212845 main.go:141] libmachine: (functional-905978) DBG | domain functional-905978 has defined MAC address 52:54:00:22:23:40 in network mk-functional-905978
I0414 14:28:30.212285 1212845 main.go:141] libmachine: (functional-905978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:23:40", ip: ""} in network mk-functional-905978: {Iface:virbr1 ExpiryTime:2025-04-14 15:25:36 +0000 UTC Type:0 Mac:52:54:00:22:23:40 Iaid: IPaddr:192.168.50.67 Prefix:24 Hostname:functional-905978 Clientid:01:52:54:00:22:23:40}
I0414 14:28:30.212315 1212845 main.go:141] libmachine: (functional-905978) DBG | domain functional-905978 has defined IP address 192.168.50.67 and MAC address 52:54:00:22:23:40 in network mk-functional-905978
I0414 14:28:30.212422 1212845 main.go:141] libmachine: (functional-905978) Calling .GetSSHPort
I0414 14:28:30.212585 1212845 main.go:141] libmachine: (functional-905978) Calling .GetSSHKeyPath
I0414 14:28:30.212786 1212845 main.go:141] libmachine: (functional-905978) Calling .GetSSHUsername
I0414 14:28:30.212965 1212845 sshutil.go:53] new ssh client: &{IP:192.168.50.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/functional-905978/id_rsa Username:docker}
I0414 14:28:30.294333 1212845 ssh_runner.go:195] Run: sudo crictl images --output json
I0414 14:28:30.351665 1212845 main.go:141] libmachine: Making call to close driver server
I0414 14:28:30.351686 1212845 main.go:141] libmachine: (functional-905978) Calling .Close
I0414 14:28:30.352003 1212845 main.go:141] libmachine: Successfully made call to close driver server
I0414 14:28:30.352030 1212845 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 14:28:30.352043 1212845 main.go:141] libmachine: Making call to close driver server
I0414 14:28:30.352051 1212845 main.go:141] libmachine: (functional-905978) Calling .Close
I0414 14:28:30.352740 1212845 main.go:141] libmachine: (functional-905978) DBG | Closing plugin on server side
I0414 14:28:30.352792 1212845 main.go:141] libmachine: Successfully made call to close driver server
I0414 14:28:30.352835 1212845 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-905978 image ls --format json --alsologtostderr:
[{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-905978"],"size":"2372971"},{"id":"sha256:7f8f8ec86f2c919893487fbaa9dd8ab438165545ee5d604699175638e8b4be0d","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-905978"],"size":"991"},{"id":"sha256:4cad75abc83d5ca6ee22053d85850676eaef657ee9d723d7bef61179e1e1e485","repoDigests":["docker.io/library/n
ginx@sha256:09369da6b10306312cd908661320086bf87fbae1b6b0c49a1f50ba531fef2eab"],"repoTags":["docker.io/library/nginx:latest"],"size":"72207578"},{"id":"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef","repoDigests":["registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.2"],"size":"28670731"},{"id":"sha256:cffdda17b84045e45953b30e2496ea68a19816b68e40fd40201ca0438b4cd332","repoDigests":[],"repoTags":["localhost/my-image:functional-905978"],"size":"774889"},{"id":"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"18562039"},{"id":"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d0
2f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"57680541"},{"id":"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5","repoDigests":["registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.2"],"size":"30907858"},{"id":"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.2"],"size":"20657902"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56","re
poDigests":["docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26"],"repoTags":["docker.io/kindest/kindnetd:v20241212-9f82dd49"],"size":"39008320"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.2"],"size":"262593
92"},{"id":"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"320368"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-905978 image ls --format json --alsologtostderr:
I0414 14:28:29.930488 1212820 out.go:345] Setting OutFile to fd 1 ...
I0414 14:28:29.930582 1212820 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 14:28:29.930589 1212820 out.go:358] Setting ErrFile to fd 2...
I0414 14:28:29.930593 1212820 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 14:28:29.930776 1212820 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
I0414 14:28:29.931486 1212820 config.go:182] Loaded profile config "functional-905978": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0414 14:28:29.931679 1212820 config.go:182] Loaded profile config "functional-905978": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0414 14:28:29.932128 1212820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0414 14:28:29.932199 1212820 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 14:28:29.948238 1212820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40399
I0414 14:28:29.948706 1212820 main.go:141] libmachine: () Calling .GetVersion
I0414 14:28:29.949321 1212820 main.go:141] libmachine: Using API Version  1
I0414 14:28:29.949344 1212820 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 14:28:29.949723 1212820 main.go:141] libmachine: () Calling .GetMachineName
I0414 14:28:29.949929 1212820 main.go:141] libmachine: (functional-905978) Calling .GetState
I0414 14:28:29.952130 1212820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0414 14:28:29.952194 1212820 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 14:28:29.968264 1212820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39387
I0414 14:28:29.968781 1212820 main.go:141] libmachine: () Calling .GetVersion
I0414 14:28:29.969294 1212820 main.go:141] libmachine: Using API Version  1
I0414 14:28:29.969321 1212820 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 14:28:29.969744 1212820 main.go:141] libmachine: () Calling .GetMachineName
I0414 14:28:29.969991 1212820 main.go:141] libmachine: (functional-905978) Calling .DriverName
I0414 14:28:29.970222 1212820 ssh_runner.go:195] Run: systemctl --version
I0414 14:28:29.970249 1212820 main.go:141] libmachine: (functional-905978) Calling .GetSSHHostname
I0414 14:28:29.973309 1212820 main.go:141] libmachine: (functional-905978) DBG | domain functional-905978 has defined MAC address 52:54:00:22:23:40 in network mk-functional-905978
I0414 14:28:29.973749 1212820 main.go:141] libmachine: (functional-905978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:23:40", ip: ""} in network mk-functional-905978: {Iface:virbr1 ExpiryTime:2025-04-14 15:25:36 +0000 UTC Type:0 Mac:52:54:00:22:23:40 Iaid: IPaddr:192.168.50.67 Prefix:24 Hostname:functional-905978 Clientid:01:52:54:00:22:23:40}
I0414 14:28:29.973772 1212820 main.go:141] libmachine: (functional-905978) DBG | domain functional-905978 has defined IP address 192.168.50.67 and MAC address 52:54:00:22:23:40 in network mk-functional-905978
I0414 14:28:29.973921 1212820 main.go:141] libmachine: (functional-905978) Calling .GetSSHPort
I0414 14:28:29.974169 1212820 main.go:141] libmachine: (functional-905978) Calling .GetSSHKeyPath
I0414 14:28:29.974334 1212820 main.go:141] libmachine: (functional-905978) Calling .GetSSHUsername
I0414 14:28:29.974463 1212820 sshutil.go:53] new ssh client: &{IP:192.168.50.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/functional-905978/id_rsa Username:docker}
I0414 14:28:30.063186 1212820 ssh_runner.go:195] Run: sudo crictl images --output json
I0414 14:28:30.113208 1212820 main.go:141] libmachine: Making call to close driver server
I0414 14:28:30.113228 1212820 main.go:141] libmachine: (functional-905978) Calling .Close
I0414 14:28:30.113515 1212820 main.go:141] libmachine: Successfully made call to close driver server
I0414 14:28:30.113539 1212820 main.go:141] libmachine: (functional-905978) DBG | Closing plugin on server side
I0414 14:28:30.113542 1212820 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 14:28:30.113555 1212820 main.go:141] libmachine: Making call to close driver server
I0414 14:28:30.113567 1212820 main.go:141] libmachine: (functional-905978) Calling .Close
I0414 14:28:30.113797 1212820 main.go:141] libmachine: Successfully made call to close driver server
I0414 14:28:30.113813 1212820 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
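For reference, the stdout of "image ls --format json" above is a single JSON array of image entries. A minimal Go sketch that decodes such output; the struct and field names are inferred from the dump itself, not taken from minikube's source:

package main

import (
	"encoding/json"
	"fmt"
)

// imageEntry mirrors the fields visible in the JSON dump above
// (id, repoDigests, repoTags, size); note that size arrives as a string.
type imageEntry struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	// One entry copied from the output above.
	raw := `[{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"}]`
	var images []imageEntry
	if err := json.Unmarshal([]byte(raw), &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img.RepoTags, img.Size)
	}
}

As the three dumps show, the json and yaml formats report size as a byte count in a string, while the table format renders it human-readable (for example 72306 bytes as 72.3kB).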

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-905978 image ls --format yaml --alsologtostderr:
- id: sha256:4cad75abc83d5ca6ee22053d85850676eaef657ee9d723d7bef61179e1e1e485
repoDigests:
- docker.io/library/nginx@sha256:09369da6b10306312cd908661320086bf87fbae1b6b0c49a1f50ba531fef2eab
repoTags:
- docker.io/library/nginx:latest
size: "72207578"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "18562039"
- id: sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.2
size: "26259392"
- id: sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "320368"
- id: sha256:d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56
repoDigests:
- docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26
repoTags:
- docker.io/kindest/kindnetd:v20241212-9f82dd49
size: "39008320"
- id: sha256:7f8f8ec86f2c919893487fbaa9dd8ab438165545ee5d604699175638e8b4be0d
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-905978
size: "991"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.2
size: "28670731"
- id: sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.2
size: "20657902"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5
repoDigests:
- registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d
repoTags:
- registry.k8s.io/kube-proxy:v1.32.2
size: "30907858"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-905978
size: "2372971"
- id: sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "57680541"

                                                
                                                
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-905978 image ls --format yaml --alsologtostderr:
I0414 14:28:24.587565 1212701 out.go:345] Setting OutFile to fd 1 ...
I0414 14:28:24.588240 1212701 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 14:28:24.588298 1212701 out.go:358] Setting ErrFile to fd 2...
I0414 14:28:24.588314 1212701 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 14:28:24.588817 1212701 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
I0414 14:28:24.590153 1212701 config.go:182] Loaded profile config "functional-905978": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0414 14:28:24.590302 1212701 config.go:182] Loaded profile config "functional-905978": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0414 14:28:24.590878 1212701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0414 14:28:24.590962 1212701 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 14:28:24.606815 1212701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40365
I0414 14:28:24.607339 1212701 main.go:141] libmachine: () Calling .GetVersion
I0414 14:28:24.607834 1212701 main.go:141] libmachine: Using API Version  1
I0414 14:28:24.607857 1212701 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 14:28:24.608240 1212701 main.go:141] libmachine: () Calling .GetMachineName
I0414 14:28:24.608440 1212701 main.go:141] libmachine: (functional-905978) Calling .GetState
I0414 14:28:24.610652 1212701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0414 14:28:24.610701 1212701 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 14:28:24.626629 1212701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34117
I0414 14:28:24.627149 1212701 main.go:141] libmachine: () Calling .GetVersion
I0414 14:28:24.627685 1212701 main.go:141] libmachine: Using API Version  1
I0414 14:28:24.627718 1212701 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 14:28:24.628150 1212701 main.go:141] libmachine: () Calling .GetMachineName
I0414 14:28:24.628397 1212701 main.go:141] libmachine: (functional-905978) Calling .DriverName
I0414 14:28:24.628591 1212701 ssh_runner.go:195] Run: systemctl --version
I0414 14:28:24.628617 1212701 main.go:141] libmachine: (functional-905978) Calling .GetSSHHostname
I0414 14:28:24.630876 1212701 main.go:141] libmachine: (functional-905978) DBG | domain functional-905978 has defined MAC address 52:54:00:22:23:40 in network mk-functional-905978
I0414 14:28:24.631346 1212701 main.go:141] libmachine: (functional-905978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:23:40", ip: ""} in network mk-functional-905978: {Iface:virbr1 ExpiryTime:2025-04-14 15:25:36 +0000 UTC Type:0 Mac:52:54:00:22:23:40 Iaid: IPaddr:192.168.50.67 Prefix:24 Hostname:functional-905978 Clientid:01:52:54:00:22:23:40}
I0414 14:28:24.631371 1212701 main.go:141] libmachine: (functional-905978) DBG | domain functional-905978 has defined IP address 192.168.50.67 and MAC address 52:54:00:22:23:40 in network mk-functional-905978
I0414 14:28:24.631506 1212701 main.go:141] libmachine: (functional-905978) Calling .GetSSHPort
I0414 14:28:24.631647 1212701 main.go:141] libmachine: (functional-905978) Calling .GetSSHKeyPath
I0414 14:28:24.631801 1212701 main.go:141] libmachine: (functional-905978) Calling .GetSSHUsername
I0414 14:28:24.631966 1212701 sshutil.go:53] new ssh client: &{IP:192.168.50.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/functional-905978/id_rsa Username:docker}
I0414 14:28:24.719051 1212701 ssh_runner.go:195] Run: sudo crictl images --output json
I0414 14:28:24.763062 1212701 main.go:141] libmachine: Making call to close driver server
I0414 14:28:24.763081 1212701 main.go:141] libmachine: (functional-905978) Calling .Close
I0414 14:28:24.763416 1212701 main.go:141] libmachine: Successfully made call to close driver server
I0414 14:28:24.763433 1212701 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 14:28:24.763441 1212701 main.go:141] libmachine: Making call to close driver server
I0414 14:28:24.763449 1212701 main.go:141] libmachine: (functional-905978) Calling .Close
I0414 14:28:24.763469 1212701 main.go:141] libmachine: (functional-905978) DBG | Closing plugin on server side
I0414 14:28:24.763725 1212701 main.go:141] libmachine: Successfully made call to close driver server
I0414 14:28:24.763810 1212701 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 14:28:24.763783 1212701 main.go:141] libmachine: (functional-905978) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (5.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-905978 ssh pgrep buildkitd: exit status 1 (195.748923ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 image build -t localhost/my-image:functional-905978 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-amd64 -p functional-905978 image build -t localhost/my-image:functional-905978 testdata/build --alsologtostderr: (4.692640339s)
functional_test.go:340: (dbg) Stderr: out/minikube-linux-amd64 -p functional-905978 image build -t localhost/my-image:functional-905978 testdata/build --alsologtostderr:
I0414 14:28:25.012781 1212756 out.go:345] Setting OutFile to fd 1 ...
I0414 14:28:25.013029 1212756 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 14:28:25.013037 1212756 out.go:358] Setting ErrFile to fd 2...
I0414 14:28:25.013040 1212756 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 14:28:25.013259 1212756 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
I0414 14:28:25.013789 1212756 config.go:182] Loaded profile config "functional-905978": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0414 14:28:25.014444 1212756 config.go:182] Loaded profile config "functional-905978": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0414 14:28:25.014939 1212756 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0414 14:28:25.015011 1212756 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 14:28:25.031795 1212756 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37911
I0414 14:28:25.032379 1212756 main.go:141] libmachine: () Calling .GetVersion
I0414 14:28:25.032995 1212756 main.go:141] libmachine: Using API Version  1
I0414 14:28:25.033022 1212756 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 14:28:25.033484 1212756 main.go:141] libmachine: () Calling .GetMachineName
I0414 14:28:25.033740 1212756 main.go:141] libmachine: (functional-905978) Calling .GetState
I0414 14:28:25.035604 1212756 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0414 14:28:25.035649 1212756 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 14:28:25.052500 1212756 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33611
I0414 14:28:25.053012 1212756 main.go:141] libmachine: () Calling .GetVersion
I0414 14:28:25.053543 1212756 main.go:141] libmachine: Using API Version  1
I0414 14:28:25.053575 1212756 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 14:28:25.053932 1212756 main.go:141] libmachine: () Calling .GetMachineName
I0414 14:28:25.054126 1212756 main.go:141] libmachine: (functional-905978) Calling .DriverName
I0414 14:28:25.054335 1212756 ssh_runner.go:195] Run: systemctl --version
I0414 14:28:25.054366 1212756 main.go:141] libmachine: (functional-905978) Calling .GetSSHHostname
I0414 14:28:25.057203 1212756 main.go:141] libmachine: (functional-905978) DBG | domain functional-905978 has defined MAC address 52:54:00:22:23:40 in network mk-functional-905978
I0414 14:28:25.057682 1212756 main.go:141] libmachine: (functional-905978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:23:40", ip: ""} in network mk-functional-905978: {Iface:virbr1 ExpiryTime:2025-04-14 15:25:36 +0000 UTC Type:0 Mac:52:54:00:22:23:40 Iaid: IPaddr:192.168.50.67 Prefix:24 Hostname:functional-905978 Clientid:01:52:54:00:22:23:40}
I0414 14:28:25.057718 1212756 main.go:141] libmachine: (functional-905978) DBG | domain functional-905978 has defined IP address 192.168.50.67 and MAC address 52:54:00:22:23:40 in network mk-functional-905978
I0414 14:28:25.057839 1212756 main.go:141] libmachine: (functional-905978) Calling .GetSSHPort
I0414 14:28:25.058026 1212756 main.go:141] libmachine: (functional-905978) Calling .GetSSHKeyPath
I0414 14:28:25.058172 1212756 main.go:141] libmachine: (functional-905978) Calling .GetSSHUsername
I0414 14:28:25.058332 1212756 sshutil.go:53] new ssh client: &{IP:192.168.50.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/functional-905978/id_rsa Username:docker}
I0414 14:28:25.142812 1212756 build_images.go:161] Building image from path: /tmp/build.3341371004.tar
I0414 14:28:25.142896 1212756 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0414 14:28:25.156245 1212756 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3341371004.tar
I0414 14:28:25.162093 1212756 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3341371004.tar: stat -c "%s %y" /var/lib/minikube/build/build.3341371004.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3341371004.tar': No such file or directory
I0414 14:28:25.162125 1212756 ssh_runner.go:362] scp /tmp/build.3341371004.tar --> /var/lib/minikube/build/build.3341371004.tar (3072 bytes)
I0414 14:28:25.188174 1212756 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3341371004
I0414 14:28:25.198099 1212756 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3341371004 -xf /var/lib/minikube/build/build.3341371004.tar
I0414 14:28:25.212460 1212756 containerd.go:394] Building image: /var/lib/minikube/build/build.3341371004
I0414 14:28:25.212572 1212756 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3341371004 --local dockerfile=/var/lib/minikube/build/build.3341371004 --output type=image,name=localhost/my-image:functional-905978
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 2.2s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.5s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.6s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.8s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.2s

                                                
                                                
#8 exporting to image
#8 exporting layers
#8 exporting layers 0.3s done
#8 exporting manifest sha256:b5b31618a64586334e18a44734acadffd380ca16b8c1ea18e819b46add4a6e19 0.0s done
#8 exporting config sha256:cffdda17b84045e45953b30e2496ea68a19816b68e40fd40201ca0438b4cd332 0.0s done
#8 naming to localhost/my-image:functional-905978 done
#8 DONE 0.3s
I0414 14:28:29.618625 1212756 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3341371004 --local dockerfile=/var/lib/minikube/build/build.3341371004 --output type=image,name=localhost/my-image:functional-905978: (4.406007436s)
I0414 14:28:29.618713 1212756 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3341371004
I0414 14:28:29.634339 1212756 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3341371004.tar
I0414 14:28:29.653260 1212756 build_images.go:217] Built localhost/my-image:functional-905978 from /tmp/build.3341371004.tar
I0414 14:28:29.653303 1212756 build_images.go:133] succeeded building to: functional-905978
I0414 14:28:29.653308 1212756 build_images.go:134] failed building to: 
I0414 14:28:29.653380 1212756 main.go:141] libmachine: Making call to close driver server
I0414 14:28:29.653396 1212756 main.go:141] libmachine: (functional-905978) Calling .Close
I0414 14:28:29.653706 1212756 main.go:141] libmachine: Successfully made call to close driver server
I0414 14:28:29.653727 1212756 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 14:28:29.653735 1212756 main.go:141] libmachine: Making call to close driver server
I0414 14:28:29.653743 1212756 main.go:141] libmachine: (functional-905978) Calling .Close
I0414 14:28:29.654015 1212756 main.go:141] libmachine: (functional-905978) DBG | Closing plugin on server side
I0414 14:28:29.654061 1212756 main.go:141] libmachine: Successfully made call to close driver server
I0414 14:28:29.654087 1212756 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.11s)
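The stderr above records a fixed staging sequence for "image build" on a containerd runtime: copy the tarred build context under /var/lib/minikube/build, unpack it, drive buildctl against the unpacked directory, then clean up. A Go sketch of that order of operations; the commands, flags, and paths are copied from this run's log, but in minikube they execute over SSH inside the VM, so this is an illustration only, not minikube's actual code:

package main

import (
	"fmt"
	"os/exec"
)

// run executes one command locally and prints its combined output,
// standing in for the ssh_runner calls visible in the log above.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s(err=%v)\n", name, args, out, err)
}

func main() {
	dir := "/var/lib/minikube/build/build.3341371004" // build id from this run's log
	run("sudo", "mkdir", "-p", dir)
	run("sudo", "tar", "-C", dir, "-xf", dir+".tar")
	run("sudo", "buildctl", "build",
		"--frontend", "dockerfile.v0",
		"--local", "context="+dir,
		"--local", "dockerfile="+dir,
		"--output", "type=image,name=localhost/my-image:functional-905978")
	run("sudo", "rm", "-rf", dir)
	run("sudo", "rm", "-f", dir+".tar")
}

The buildkit step log in between (#1 through #8) is buildctl's own progress output; the earlier "ssh pgrep buildkitd" failure is expected, since the test only checks whether a buildkit daemon is already running before falling back to buildctl.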

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.71s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:359: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.684553242s)
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-905978
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.71s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.51s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.51s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.15s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-905978 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-905978 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-ktgvc" [60449ec2-01e2-4065-88df-1bbddae75e3f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-ktgvc" [60449ec2-01e2-4065-88df-1bbddae75e3f] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.003079864s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.15s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 image load --daemon kicbase/echo-server:functional-905978 --alsologtostderr
functional_test.go:372: (dbg) Done: out/minikube-linux-amd64 -p functional-905978 image load --daemon kicbase/echo-server:functional-905978 --alsologtostderr: (1.177523211s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.41s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 image load --daemon kicbase/echo-server:functional-905978 --alsologtostderr
functional_test.go:382: (dbg) Done: out/minikube-linux-amd64 -p functional-905978 image load --daemon kicbase/echo-server:functional-905978 --alsologtostderr: (1.106095442s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.33s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-905978
functional_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 image load --daemon kicbase/echo-server:functional-905978 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.92s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 image save kicbase/echo-server:functional-905978 /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.42s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 image rm kicbase/echo-server:functional-905978 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.43s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.71s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-905978
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 image save --daemon kicbase/echo-server:functional-905978 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect kicbase/echo-server:functional-905978
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1332: Took "277.52454ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1346: Took "59.311322ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1383: Took "278.285511ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1396: Took "49.690502ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

TestFunctional/parallel/MountCmd/any-port (11.67s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-905978 /tmp/TestFunctionalparallelMountCmdany-port2945775728/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1744640889024263482" to /tmp/TestFunctionalparallelMountCmdany-port2945775728/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1744640889024263482" to /tmp/TestFunctionalparallelMountCmdany-port2945775728/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1744640889024263482" to /tmp/TestFunctionalparallelMountCmdany-port2945775728/001/test-1744640889024263482
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-905978 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (211.902197ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0414 14:28:09.236472 1203639 retry.go:31] will retry after 671.34083ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr 14 14:28 created-by-test
-rw-r--r-- 1 docker docker 24 Apr 14 14:28 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr 14 14:28 test-1744640889024263482
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 ssh cat /mount-9p/test-1744640889024263482
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-905978 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [ce7bc417-b0bf-4c9b-bea7-fc1f277798cd] Pending
helpers_test.go:344: "busybox-mount" [ce7bc417-b0bf-4c9b-bea7-fc1f277798cd] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [ce7bc417-b0bf-4c9b-bea7-fc1f277798cd] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [ce7bc417-b0bf-4c9b-bea7-fc1f277798cd] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 9.005199815s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-905978 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-905978 /tmp/TestFunctionalparallelMountCmdany-port2945775728/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.67s)
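The lone "retry.go:31] will retry after 671.34083ms" line above is the usual poll-with-backoff pattern: the first findmnt probe runs before the 9p mount has settled, fails once, and is retried after a randomized delay. A minimal sketch of that pattern, assuming nothing about minikube's real retry helper beyond what the log shows:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn up to attempts times, sleeping a jittered, growing
// delay between tries; the same shape as the "will retry after ..."
// lines in the log above (a sketch, not minikube's code).
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retry(4, 200*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("mount not ready")
		}
		return nil
	})
	fmt.Println("done:", err)
}

In the test this single retry is enough: the second findmnt succeeds and the busybox-mount pod goes on to read and modify the mounted files.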

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.26s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 service list -o json
functional_test.go:1511: Took "337.019167ms" to run "out/minikube-linux-amd64 -p functional-905978 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.50.67:31852
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

TestFunctional/parallel/ServiceCmd/Format (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.31s)

TestFunctional/parallel/ServiceCmd/URL (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.50.67:31852
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.31s)
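The HTTPS and URL subtests above resolve the same NodePort endpoint for the hello-node service, 192.168.50.67:31852. A throwaway Go probe of such an endpoint, using the URL from this particular run (it changes on every cluster):

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// NodePort URL discovered by this run; differs per cluster.
	resp, err := http.Get("http://192.168.50.67:31852")
	if err != nil {
		fmt.Println("unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}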

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)
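All three subtests exercise the same command: update-context rewrites the kubeconfig entry for the profile so it points at the VM's current IP. A sketch of verifying the result by hand (standard kubectl flags; the jsonpath just prints the API server URL of the current context):
	$ out/minikube-linux-amd64 -p functional-905978 update-context
	$ kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'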

TestFunctional/parallel/MountCmd/specific-port (1.61s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-905978 /tmp/TestFunctionalparallelMountCmdspecific-port1389122606/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-905978 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (205.692153ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0414 14:28:20.899907 1203639 retry.go:31] will retry after 371.662263ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-905978 /tmp/TestFunctionalparallelMountCmdspecific-port1389122606/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-905978 ssh "sudo umount -f /mount-9p": exit status 1 (229.338989ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-905978 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-905978 /tmp/TestFunctionalparallelMountCmdspecific-port1389122606/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.61s)
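A minimal sketch of the same 9p mount outside the test harness; /tmp/src is a hypothetical host directory, and the sleep stands in for the retry the test performs while the mount daemon comes up:
	$ out/minikube-linux-amd64 mount -p functional-905978 /tmp/src:/mount-9p --port 46464 &
	$ sleep 2
	$ out/minikube-linux-amd64 -p functional-905978 ssh "findmnt -T /mount-9p | grep 9p"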

TestFunctional/parallel/MountCmd/VerifyCleanup (1.43s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-905978 /tmp/TestFunctionalparallelMountCmdVerifyCleanup516571382/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-905978 /tmp/TestFunctionalparallelMountCmdVerifyCleanup516571382/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-905978 /tmp/TestFunctionalparallelMountCmdVerifyCleanup516571382/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-905978 ssh "findmnt -T" /mount1: exit status 1 (316.673761ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0414 14:28:22.626902 1203639 retry.go:31] will retry after 442.158677ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-905978 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-905978 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-905978 /tmp/TestFunctionalparallelMountCmdVerifyCleanup516571382/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-905978 /tmp/TestFunctionalparallelMountCmdVerifyCleanup516571382/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-905978 /tmp/TestFunctionalparallelMountCmdVerifyCleanup516571382/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.43s)
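The cleanup path this test relies on is the --kill flag (used verbatim above), which tears down every mount daemon for the profile rather than a single mount point; a sketch:
	$ out/minikube-linux-amd64 mount -p functional-905978 --kill=true
	$ out/minikube-linux-amd64 -p functional-905978 ssh "findmnt -T /mount1"   # now expected to fail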

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-905978
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-905978
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-905978
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-290859 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)
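A sketch of the same label inspection with one node per line instead of the single-line jsonpath above (standard kubectl jsonpath range syntax):
	$ kubectl --context ha-290859 get nodes \
	    -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.metadata.labels}{"\n"}{end}'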

TestJSONOutput/start/Command (52.69s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-147046 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
E0414 14:59:01.749727 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/addons-537199/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-147046 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (52.685812704s)
--- PASS: TestJSONOutput/start/Command (52.69s)
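With --output=json, minikube emits one CloudEvent per line; the schema is visible in the TestErrorJSONOutput transcript further down (specversion, type, data.message, data.currentstep). A sketch of filtering the stream for step messages, assuming jq is available:
	$ out/minikube-linux-amd64 start -p json-output-147046 --output=json --user=testUser \
	    | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'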

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.69s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-147046 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.6s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-147046 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.47s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-147046 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-147046 --output=json --user=testUser: (6.467221722s)
--- PASS: TestJSONOutput/stop/Command (6.47s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-669804 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-669804 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (65.047441ms)
-- stdout --
	{"specversion":"1.0","id":"4d3cb425-df3c-40ee-bbeb-2443fc35fa4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-669804] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b9e53f50-b517-4346-8682-f29b759501d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20512"}}
	{"specversion":"1.0","id":"7453778e-d91f-4ffe-9940-296e04eaec3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fdd00d06-097b-4b63-825f-e44e2d9ee476","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20512-1196368/kubeconfig"}}
	{"specversion":"1.0","id":"3db93d23-d431-4242-a6b6-5603756ed438","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1196368/.minikube"}}
	{"specversion":"1.0","id":"af4b8c75-f232-451e-8f94-c5e45326483d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"f9c192f8-3e8f-4e34-a789-c7b5c5ee4e56","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"eefec4e0-2f9e-4a36-b5e6-8419b51ce4d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-669804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-669804
--- PASS: TestErrorJSONOutput (0.20s)
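The error event shown above carries a machine-readable name and exit code. A sketch of extracting just those fields from the stream (the profile name is the throwaway one this test created and immediately deleted, so re-running will re-create it):
	$ out/minikube-linux-amd64 start -p json-output-error-669804 --output=json --driver=fail \
	    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name): exit \(.data.exitcode)"'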

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (89.39s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-573140 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-573140 --driver=kvm2  --container-runtime=containerd: (43.37188578s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-596640 --driver=kvm2  --container-runtime=containerd
E0414 15:00:58.679740 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/addons-537199/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-596640 --driver=kvm2  --container-runtime=containerd: (42.933269498s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-573140
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-596640
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-596640" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-596640
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-596640: (1.000394907s)
helpers_test.go:175: Cleaning up "first-573140" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-573140
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-573140: (1.015010907s)
--- PASS: TestMinikubeProfile (89.39s)
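A sketch of reading `profile list -ojson` programmatically; this assumes the JSON groups profiles under "valid"/"invalid" keys with a Name field per profile, which matches recent minikube behavior but is worth re-checking against your version:
	$ out/minikube-linux-amd64 profile list -ojson | jq -r '.valid[].Name'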

TestMountStart/serial/StartWithMountFirst (28.32s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-728696 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-728696 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (27.316864572s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.32s)

TestMountStart/serial/VerifyMountFirst (0.39s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-728696 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-728696 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)
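The verification is just a mount-table grep inside the guest. A sketch that also eyeballs the negotiated 9p options (--mount-msize 6543 and --mount-port 46464 were passed at start, so those values should appear in the options column):
	$ out/minikube-linux-amd64 -p mount-start-1-728696 ssh -- mount | grep 9p
	# expect msize=6543 and the configured port among the mount options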

TestMountStart/serial/StartWithMountSecond (28.74s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-747277 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-747277 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (27.743395178s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.74s)

TestMountStart/serial/VerifyMountSecond (0.37s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-747277 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-747277 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

TestMountStart/serial/DeleteFirst (0.72s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-728696 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.72s)

TestMountStart/serial/VerifyMountPostDelete (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-747277 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-747277 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

TestMountStart/serial/Stop (2.28s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-747277
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-747277: (2.276758589s)
--- PASS: TestMountStart/serial/Stop (2.28s)

TestMountStart/serial/RestartStopped (23.75s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-747277
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-747277: (22.748797785s)
--- PASS: TestMountStart/serial/RestartStopped (23.75s)

TestMountStart/serial/VerifyMountPostStop (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-747277 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-747277 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

TestMultiNode/serial/FreshStart2Nodes (115.14s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-906041 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0414 15:02:59.575569 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/functional-905978/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-906041 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m54.728851661s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-906041 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (115.14s)
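Both checks a reader would use to confirm the two nodes registered appear verbatim in this run; kubectl reaches the cluster through the profile's context:
	$ out/minikube-linux-amd64 -p multinode-906041 status --alsologtostderr
	$ kubectl --context multinode-906041 get nodes -o wide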

TestMultiNode/serial/DeployApp2Nodes (6.12s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-906041 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-906041 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-906041 -- rollout status deployment/busybox: (4.673501644s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-906041 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-906041 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-906041 -- exec busybox-58667487b6-gbbpl -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-906041 -- exec busybox-58667487b6-txbl4 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-906041 -- exec busybox-58667487b6-gbbpl -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-906041 -- exec busybox-58667487b6-txbl4 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-906041 -- exec busybox-58667487b6-gbbpl -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-906041 -- exec busybox-58667487b6-txbl4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.12s)

TestMultiNode/serial/PingHostFrom2Pods (0.78s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-906041 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-906041 -- exec busybox-58667487b6-gbbpl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-906041 -- exec busybox-58667487b6-gbbpl -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-906041 -- exec busybox-58667487b6-txbl4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-906041 -- exec busybox-58667487b6-txbl4 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.78s)
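The awk/cut pipeline above extracts the resolved address of host.minikube.internal from busybox's nslookup output (line 5, third field). A sketch of running one round by hand; note the pod name is specific to this run and would need to be looked up again:
	$ kubectl --context multinode-906041 exec busybox-58667487b6-gbbpl -- \
	    sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	$ kubectl --context multinode-906041 exec busybox-58667487b6-gbbpl -- sh -c "ping -c 1 192.168.39.1"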

TestMultiNode/serial/AddNode (51.79s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-906041 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-906041 -v 3 --alsologtostderr: (51.206013057s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-906041 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (51.79s)

TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-906041 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.59s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.59s)

TestMultiNode/serial/CopyFile (7.35s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-906041 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-906041 cp testdata/cp-test.txt multinode-906041:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-906041 ssh -n multinode-906041 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-906041 cp multinode-906041:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3081977770/001/cp-test_multinode-906041.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-906041 ssh -n multinode-906041 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-906041 cp multinode-906041:/home/docker/cp-test.txt multinode-906041-m02:/home/docker/cp-test_multinode-906041_multinode-906041-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-906041 ssh -n multinode-906041 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-906041 ssh -n multinode-906041-m02 "sudo cat /home/docker/cp-test_multinode-906041_multinode-906041-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-906041 cp multinode-906041:/home/docker/cp-test.txt multinode-906041-m03:/home/docker/cp-test_multinode-906041_multinode-906041-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-906041 ssh -n multinode-906041 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-906041 ssh -n multinode-906041-m03 "sudo cat /home/docker/cp-test_multinode-906041_multinode-906041-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-906041 cp testdata/cp-test.txt multinode-906041-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-906041 ssh -n multinode-906041-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-906041 cp multinode-906041-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3081977770/001/cp-test_multinode-906041-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-906041 ssh -n multinode-906041-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-906041 cp multinode-906041-m02:/home/docker/cp-test.txt multinode-906041:/home/docker/cp-test_multinode-906041-m02_multinode-906041.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-906041 ssh -n multinode-906041-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-906041 ssh -n multinode-906041 "sudo cat /home/docker/cp-test_multinode-906041-m02_multinode-906041.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-906041 cp multinode-906041-m02:/home/docker/cp-test.txt multinode-906041-m03:/home/docker/cp-test_multinode-906041-m02_multinode-906041-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-906041 ssh -n multinode-906041-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-906041 ssh -n multinode-906041-m03 "sudo cat /home/docker/cp-test_multinode-906041-m02_multinode-906041-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-906041 cp testdata/cp-test.txt multinode-906041-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-906041 ssh -n multinode-906041-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-906041 cp multinode-906041-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3081977770/001/cp-test_multinode-906041-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-906041 ssh -n multinode-906041-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-906041 cp multinode-906041-m03:/home/docker/cp-test.txt multinode-906041:/home/docker/cp-test_multinode-906041-m03_multinode-906041.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-906041 ssh -n multinode-906041-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-906041 ssh -n multinode-906041 "sudo cat /home/docker/cp-test_multinode-906041-m03_multinode-906041.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-906041 cp multinode-906041-m03:/home/docker/cp-test.txt multinode-906041-m02:/home/docker/cp-test_multinode-906041-m03_multinode-906041-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-906041 ssh -n multinode-906041-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-906041 ssh -n multinode-906041-m02 "sudo cat /home/docker/cp-test_multinode-906041-m03_multinode-906041-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.35s)
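The pattern above is the full cross-product: cp a file onto each node, then ssh -n <node> to cat it back. One round of it by hand, using commands taken verbatim from this run:
	$ out/minikube-linux-amd64 -p multinode-906041 cp testdata/cp-test.txt multinode-906041-m02:/home/docker/cp-test.txt
	$ out/minikube-linux-amd64 -p multinode-906041 ssh -n multinode-906041-m02 "sudo cat /home/docker/cp-test.txt"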

TestMultiNode/serial/StopNode (2.16s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-906041 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-906041 node stop m03: (1.297359888s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-906041 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-906041 status: exit status 7 (423.900208ms)
-- stdout --
	multinode-906041
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-906041-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-906041-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-906041 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-906041 status --alsologtostderr: exit status 7 (434.697135ms)
-- stdout --
	multinode-906041
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-906041-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-906041-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0414 15:05:51.190951 1229674 out.go:345] Setting OutFile to fd 1 ...
	I0414 15:05:51.191056 1229674 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 15:05:51.191061 1229674 out.go:358] Setting ErrFile to fd 2...
	I0414 15:05:51.191066 1229674 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 15:05:51.191357 1229674 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
	I0414 15:05:51.191595 1229674 out.go:352] Setting JSON to false
	I0414 15:05:51.191656 1229674 mustload.go:65] Loading cluster: multinode-906041
	I0414 15:05:51.191788 1229674 notify.go:220] Checking for updates...
	I0414 15:05:51.192241 1229674 config.go:182] Loaded profile config "multinode-906041": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 15:05:51.192276 1229674 status.go:174] checking status of multinode-906041 ...
	I0414 15:05:51.192772 1229674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 15:05:51.192831 1229674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:05:51.210312 1229674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37459
	I0414 15:05:51.210821 1229674 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:05:51.211479 1229674 main.go:141] libmachine: Using API Version  1
	I0414 15:05:51.211514 1229674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:05:51.211901 1229674 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:05:51.212121 1229674 main.go:141] libmachine: (multinode-906041) Calling .GetState
	I0414 15:05:51.213985 1229674 status.go:371] multinode-906041 host status = "Running" (err=<nil>)
	I0414 15:05:51.214005 1229674 host.go:66] Checking if "multinode-906041" exists ...
	I0414 15:05:51.214354 1229674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 15:05:51.214401 1229674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:05:51.230615 1229674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38457
	I0414 15:05:51.231174 1229674 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:05:51.231673 1229674 main.go:141] libmachine: Using API Version  1
	I0414 15:05:51.231698 1229674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:05:51.232082 1229674 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:05:51.232286 1229674 main.go:141] libmachine: (multinode-906041) Calling .GetIP
	I0414 15:05:51.235150 1229674 main.go:141] libmachine: (multinode-906041) DBG | domain multinode-906041 has defined MAC address 52:54:00:60:b8:15 in network mk-multinode-906041
	I0414 15:05:51.235542 1229674 main.go:141] libmachine: (multinode-906041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:b8:15", ip: ""} in network mk-multinode-906041: {Iface:virbr1 ExpiryTime:2025-04-14 16:03:02 +0000 UTC Type:0 Mac:52:54:00:60:b8:15 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-906041 Clientid:01:52:54:00:60:b8:15}
	I0414 15:05:51.235564 1229674 main.go:141] libmachine: (multinode-906041) DBG | domain multinode-906041 has defined IP address 192.168.39.77 and MAC address 52:54:00:60:b8:15 in network mk-multinode-906041
	I0414 15:05:51.235682 1229674 host.go:66] Checking if "multinode-906041" exists ...
	I0414 15:05:51.235995 1229674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 15:05:51.236053 1229674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:05:51.251644 1229674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39223
	I0414 15:05:51.252057 1229674 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:05:51.252488 1229674 main.go:141] libmachine: Using API Version  1
	I0414 15:05:51.252510 1229674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:05:51.252833 1229674 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:05:51.253015 1229674 main.go:141] libmachine: (multinode-906041) Calling .DriverName
	I0414 15:05:51.253173 1229674 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 15:05:51.253200 1229674 main.go:141] libmachine: (multinode-906041) Calling .GetSSHHostname
	I0414 15:05:51.255958 1229674 main.go:141] libmachine: (multinode-906041) DBG | domain multinode-906041 has defined MAC address 52:54:00:60:b8:15 in network mk-multinode-906041
	I0414 15:05:51.256364 1229674 main.go:141] libmachine: (multinode-906041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:b8:15", ip: ""} in network mk-multinode-906041: {Iface:virbr1 ExpiryTime:2025-04-14 16:03:02 +0000 UTC Type:0 Mac:52:54:00:60:b8:15 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-906041 Clientid:01:52:54:00:60:b8:15}
	I0414 15:05:51.256402 1229674 main.go:141] libmachine: (multinode-906041) DBG | domain multinode-906041 has defined IP address 192.168.39.77 and MAC address 52:54:00:60:b8:15 in network mk-multinode-906041
	I0414 15:05:51.256478 1229674 main.go:141] libmachine: (multinode-906041) Calling .GetSSHPort
	I0414 15:05:51.256654 1229674 main.go:141] libmachine: (multinode-906041) Calling .GetSSHKeyPath
	I0414 15:05:51.256824 1229674 main.go:141] libmachine: (multinode-906041) Calling .GetSSHUsername
	I0414 15:05:51.256955 1229674 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/multinode-906041/id_rsa Username:docker}
	I0414 15:05:51.342463 1229674 ssh_runner.go:195] Run: systemctl --version
	I0414 15:05:51.348165 1229674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 15:05:51.362630 1229674 kubeconfig.go:125] found "multinode-906041" server: "https://192.168.39.77:8443"
	I0414 15:05:51.362675 1229674 api_server.go:166] Checking apiserver status ...
	I0414 15:05:51.362713 1229674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:05:51.376682 1229674 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1100/cgroup
	W0414 15:05:51.389587 1229674 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1100/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0414 15:05:51.389669 1229674 ssh_runner.go:195] Run: ls
	I0414 15:05:51.394135 1229674 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8443/healthz ...
	I0414 15:05:51.398194 1229674 api_server.go:279] https://192.168.39.77:8443/healthz returned 200:
	ok
	I0414 15:05:51.398220 1229674 status.go:463] multinode-906041 apiserver status = Running (err=<nil>)
	I0414 15:05:51.398233 1229674 status.go:176] multinode-906041 status: &{Name:multinode-906041 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 15:05:51.398259 1229674 status.go:174] checking status of multinode-906041-m02 ...
	I0414 15:05:51.398655 1229674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 15:05:51.398704 1229674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:05:51.415147 1229674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43149
	I0414 15:05:51.415671 1229674 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:05:51.416161 1229674 main.go:141] libmachine: Using API Version  1
	I0414 15:05:51.416187 1229674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:05:51.416521 1229674 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:05:51.416727 1229674 main.go:141] libmachine: (multinode-906041-m02) Calling .GetState
	I0414 15:05:51.418491 1229674 status.go:371] multinode-906041-m02 host status = "Running" (err=<nil>)
	I0414 15:05:51.418512 1229674 host.go:66] Checking if "multinode-906041-m02" exists ...
	I0414 15:05:51.418817 1229674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 15:05:51.418896 1229674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:05:51.434521 1229674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36185
	I0414 15:05:51.434944 1229674 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:05:51.435397 1229674 main.go:141] libmachine: Using API Version  1
	I0414 15:05:51.435423 1229674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:05:51.435782 1229674 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:05:51.435951 1229674 main.go:141] libmachine: (multinode-906041-m02) Calling .GetIP
	I0414 15:05:51.438742 1229674 main.go:141] libmachine: (multinode-906041-m02) DBG | domain multinode-906041-m02 has defined MAC address 52:54:00:37:f1:65 in network mk-multinode-906041
	I0414 15:05:51.439159 1229674 main.go:141] libmachine: (multinode-906041-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:f1:65", ip: ""} in network mk-multinode-906041: {Iface:virbr1 ExpiryTime:2025-04-14 16:04:07 +0000 UTC Type:0 Mac:52:54:00:37:f1:65 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:multinode-906041-m02 Clientid:01:52:54:00:37:f1:65}
	I0414 15:05:51.439191 1229674 main.go:141] libmachine: (multinode-906041-m02) DBG | domain multinode-906041-m02 has defined IP address 192.168.39.187 and MAC address 52:54:00:37:f1:65 in network mk-multinode-906041
	I0414 15:05:51.439394 1229674 host.go:66] Checking if "multinode-906041-m02" exists ...
	I0414 15:05:51.439800 1229674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 15:05:51.439852 1229674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:05:51.455852 1229674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40865
	I0414 15:05:51.456348 1229674 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:05:51.456822 1229674 main.go:141] libmachine: Using API Version  1
	I0414 15:05:51.456839 1229674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:05:51.457151 1229674 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:05:51.457344 1229674 main.go:141] libmachine: (multinode-906041-m02) Calling .DriverName
	I0414 15:05:51.457585 1229674 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 15:05:51.457612 1229674 main.go:141] libmachine: (multinode-906041-m02) Calling .GetSSHHostname
	I0414 15:05:51.460367 1229674 main.go:141] libmachine: (multinode-906041-m02) DBG | domain multinode-906041-m02 has defined MAC address 52:54:00:37:f1:65 in network mk-multinode-906041
	I0414 15:05:51.460822 1229674 main.go:141] libmachine: (multinode-906041-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:f1:65", ip: ""} in network mk-multinode-906041: {Iface:virbr1 ExpiryTime:2025-04-14 16:04:07 +0000 UTC Type:0 Mac:52:54:00:37:f1:65 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:multinode-906041-m02 Clientid:01:52:54:00:37:f1:65}
	I0414 15:05:51.460847 1229674 main.go:141] libmachine: (multinode-906041-m02) DBG | domain multinode-906041-m02 has defined IP address 192.168.39.187 and MAC address 52:54:00:37:f1:65 in network mk-multinode-906041
	I0414 15:05:51.461027 1229674 main.go:141] libmachine: (multinode-906041-m02) Calling .GetSSHPort
	I0414 15:05:51.461206 1229674 main.go:141] libmachine: (multinode-906041-m02) Calling .GetSSHKeyPath
	I0414 15:05:51.461352 1229674 main.go:141] libmachine: (multinode-906041-m02) Calling .GetSSHUsername
	I0414 15:05:51.461469 1229674 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1196368/.minikube/machines/multinode-906041-m02/id_rsa Username:docker}
	I0414 15:05:51.541945 1229674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 15:05:51.555639 1229674 status.go:176] multinode-906041-m02 status: &{Name:multinode-906041-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0414 15:05:51.555678 1229674 status.go:174] checking status of multinode-906041-m03 ...
	I0414 15:05:51.556052 1229674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 15:05:51.556105 1229674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:05:51.572448 1229674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42751
	I0414 15:05:51.572910 1229674 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:05:51.573409 1229674 main.go:141] libmachine: Using API Version  1
	I0414 15:05:51.573426 1229674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:05:51.573786 1229674 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:05:51.574044 1229674 main.go:141] libmachine: (multinode-906041-m03) Calling .GetState
	I0414 15:05:51.576264 1229674 status.go:371] multinode-906041-m03 host status = "Stopped" (err=<nil>)
	I0414 15:05:51.576286 1229674 status.go:384] host is not running, skipping remaining checks
	I0414 15:05:51.576293 1229674 status.go:176] multinode-906041-m03 status: &{Name:multinode-906041-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.16s)
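Note that `status` deliberately exits non-zero once any node is stopped (exit status 7 in this run's output; observed behavior here, not guaranteed semantics). A sketch of reproducing the assertion:
	$ out/minikube-linux-amd64 -p multinode-906041 node stop m03
	$ out/minikube-linux-amd64 -p multinode-906041 status; echo "exit=$?"   # printed exit=7 while m03 was down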

TestMultiNode/serial/StartAfterStop (32.65s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-906041 node start m03 -v=7 --alsologtostderr
E0414 15:05:58.683478 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/addons-537199/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:06:02.647423 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/functional-905978/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-906041 node start m03 -v=7 --alsologtostderr: (32.008666678s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-906041 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (32.65s)

TestMultiNode/serial/RestartKeepsNodes (325.23s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-906041
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-906041
E0414 15:07:59.574774 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/functional-905978/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-906041: (3m3.126663299s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-906041 --wait=true -v=8 --alsologtostderr
E0414 15:10:58.679771 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/addons-537199/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-906041 --wait=true -v=8 --alsologtostderr: (2m21.999572482s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-906041
--- PASS: TestMultiNode/serial/RestartKeepsNodes (325.23s)

TestMultiNode/serial/DeleteNode (2.19s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-906041 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-906041 node delete m03: (1.660798952s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-906041 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.19s)
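The go-template passed to kubectl at multinode_test.go:444 walks every node's conditions and prints the status of each "Ready" condition. The same template body can be exercised offline with Go's text/template; the one-node JSON document below is a made-up stand-in for `kubectl get nodes -o json` output.

package main

import (
	"encoding/json"
	"os"
	"text/template"
)

func main() {
	// Exact template body the test hands to kubectl: range over the nodes,
	// then over each node's conditions, printing each "Ready" status.
	const body = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	// Made-up single-node stand-in for `kubectl get nodes -o json`.
	const nodes = `{"items":[{"status":{"conditions":[{"type":"Ready","status":"True"}]}}]}`

	var doc map[string]interface{}
	if err := json.Unmarshal([]byte(nodes), &doc); err != nil {
		panic(err)
	}
	tmpl := template.Must(template.New("ready").Parse(body))
	if err := tmpl.Execute(os.Stdout, doc); err != nil {
		panic(err)
	}
	// Prints " True" on its own line; the test then inspects this output.
}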

TestMultiNode/serial/StopMultiNode (181.65s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-906041 stop
E0414 15:12:59.574770 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/functional-905978/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-906041 stop: (3m1.478818979s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-906041 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-906041 status: exit status 7 (87.635498ms)
-- stdout --
	multinode-906041
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-906041-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-906041 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-906041 status --alsologtostderr: exit status 7 (86.912494ms)
-- stdout --
	multinode-906041
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-906041-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0414 15:14:53.263113 1232425 out.go:345] Setting OutFile to fd 1 ...
	I0414 15:14:53.263225 1232425 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 15:14:53.263235 1232425 out.go:358] Setting ErrFile to fd 2...
	I0414 15:14:53.263240 1232425 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 15:14:53.263464 1232425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
	I0414 15:14:53.263632 1232425 out.go:352] Setting JSON to false
	I0414 15:14:53.263665 1232425 mustload.go:65] Loading cluster: multinode-906041
	I0414 15:14:53.263722 1232425 notify.go:220] Checking for updates...
	I0414 15:14:53.264052 1232425 config.go:182] Loaded profile config "multinode-906041": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 15:14:53.264078 1232425 status.go:174] checking status of multinode-906041 ...
	I0414 15:14:53.264664 1232425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 15:14:53.264734 1232425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:14:53.280057 1232425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34291
	I0414 15:14:53.280591 1232425 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:14:53.281196 1232425 main.go:141] libmachine: Using API Version  1
	I0414 15:14:53.281248 1232425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:14:53.281649 1232425 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:14:53.281849 1232425 main.go:141] libmachine: (multinode-906041) Calling .GetState
	I0414 15:14:53.283371 1232425 status.go:371] multinode-906041 host status = "Stopped" (err=<nil>)
	I0414 15:14:53.283388 1232425 status.go:384] host is not running, skipping remaining checks
	I0414 15:14:53.283395 1232425 status.go:176] multinode-906041 status: &{Name:multinode-906041 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 15:14:53.283438 1232425 status.go:174] checking status of multinode-906041-m02 ...
	I0414 15:14:53.283723 1232425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0414 15:14:53.283762 1232425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:14:53.299041 1232425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37705
	I0414 15:14:53.299513 1232425 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:14:53.299940 1232425 main.go:141] libmachine: Using API Version  1
	I0414 15:14:53.299966 1232425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:14:53.300459 1232425 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:14:53.300642 1232425 main.go:141] libmachine: (multinode-906041-m02) Calling .GetState
	I0414 15:14:53.302434 1232425 status.go:371] multinode-906041-m02 host status = "Stopped" (err=<nil>)
	I0414 15:14:53.302448 1232425 status.go:384] host is not running, skipping remaining checks
	I0414 15:14:53.302453 1232425 status.go:176] multinode-906041-m02 status: &{Name:multinode-906041-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (181.65s)
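The bare `exit status 7` from `minikube status` above is expected here rather than an error: per `minikube status --help`, the exit code encodes the VM, cluster, and Kubernetes state as bits from right to left, so a fully stopped profile yields 1|2|4 = 7. A sketch of that encoding follows; the flag names only approximate minikube's status command.

package main

import "fmt"

const (
	minikubeNotRunning = 1 << 0 // host/VM stopped
	clusterNotRunning  = 1 << 1 // cluster stopped
	k8sNotRunning      = 1 << 2 // kubernetes stopped
)

func main() {
	exit := 0
	hostStopped, clusterStopped, k8sStopped := true, true, true
	if hostStopped {
		exit |= minikubeNotRunning
	}
	if clusterStopped {
		exit |= clusterNotRunning
	}
	if k8sStopped {
		exit |= k8sNotRunning
	}
	fmt.Println("exit status", exit) // prints: exit status 7
}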

TestMultiNode/serial/RestartMultiNode (135.9s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-906041 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0414 15:15:41.751764 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/addons-537199/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:15:58.678926 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/addons-537199/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-906041 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m15.369830473s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-906041 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (135.90s)

TestMultiNode/serial/ValidateNameConflict (46.4s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-906041
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-906041-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-906041-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (66.685719ms)
-- stdout --
	* [multinode-906041-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20512-1196368/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1196368/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-906041-m02' is duplicated with machine name 'multinode-906041-m02' in profile 'multinode-906041'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-906041-m03 --driver=kvm2  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-906041-m03 --driver=kvm2  --container-runtime=containerd: (45.26385774s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-906041
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-906041: exit status 80 (220.523633ms)
-- stdout --
	* Adding node m03 to cluster multinode-906041 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-906041-m03 already exists in multinode-906041-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-906041-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (46.40s)
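The exit-14 failure above comes from minikube's profile-name uniqueness rule: a new profile may not collide with any machine name owned by an existing profile, and an n-node profile owns machines named <profile>, <profile>-m02, and so on. A rough sketch of that check follows; the helper names and the node-count map are assumptions, not minikube's real API.

package main

import "fmt"

// machineNames lists the machine names an n-node profile owns.
func machineNames(profile string, nodes int) []string {
	names := []string{profile}
	for i := 2; i <= nodes; i++ {
		names = append(names, fmt.Sprintf("%s-m%02d", profile, i))
	}
	return names
}

// conflicts reports whether newProfile collides with any existing machine name.
func conflicts(newProfile string, existing map[string]int) bool {
	for profile, nodes := range existing {
		for _, m := range machineNames(profile, nodes) {
			if m == newProfile {
				return true
			}
		}
	}
	return false
}

func main() {
	existing := map[string]int{"multinode-906041": 2}                // profile -> node count
	fmt.Println(conflicts("multinode-906041-m02", existing)) // true: rejected, exit status 14
	fmt.Println(conflicts("multinode-906041-m03", existing)) // false: start proceeds
}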

TestPreload (227.85s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-538128 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E0414 15:17:59.574892 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/functional-905978/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-538128 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m17.787944122s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-538128 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-538128 image pull gcr.io/k8s-minikube/busybox: (2.67351202s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-538128
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-538128: (1m30.981735948s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-538128 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
E0414 15:20:58.687405 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/addons-537199/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-538128 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (55.270838899s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-538128 image list
helpers_test.go:175: Cleaning up "test-preload-538128" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-538128
--- PASS: TestPreload (227.85s)

TestScheduledStopUnix (116.86s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-060144 --memory=2048 --driver=kvm2  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-060144 --memory=2048 --driver=kvm2  --container-runtime=containerd: (45.20068266s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-060144 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-060144 -n scheduled-stop-060144
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-060144 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0414 15:22:30.808586 1203639 retry.go:31] will retry after 86.626µs: open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/scheduled-stop-060144/pid: no such file or directory
I0414 15:22:30.809788 1203639 retry.go:31] will retry after 187.641µs: open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/scheduled-stop-060144/pid: no such file or directory
I0414 15:22:30.810922 1203639 retry.go:31] will retry after 336.895µs: open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/scheduled-stop-060144/pid: no such file or directory
I0414 15:22:30.812076 1203639 retry.go:31] will retry after 432.372µs: open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/scheduled-stop-060144/pid: no such file or directory
I0414 15:22:30.813237 1203639 retry.go:31] will retry after 437.209µs: open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/scheduled-stop-060144/pid: no such file or directory
I0414 15:22:30.814364 1203639 retry.go:31] will retry after 898.228µs: open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/scheduled-stop-060144/pid: no such file or directory
I0414 15:22:30.815482 1203639 retry.go:31] will retry after 692.16µs: open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/scheduled-stop-060144/pid: no such file or directory
I0414 15:22:30.816625 1203639 retry.go:31] will retry after 1.283186ms: open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/scheduled-stop-060144/pid: no such file or directory
I0414 15:22:30.818833 1203639 retry.go:31] will retry after 2.642797ms: open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/scheduled-stop-060144/pid: no such file or directory
I0414 15:22:30.822048 1203639 retry.go:31] will retry after 2.528216ms: open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/scheduled-stop-060144/pid: no such file or directory
I0414 15:22:30.825244 1203639 retry.go:31] will retry after 3.624287ms: open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/scheduled-stop-060144/pid: no such file or directory
I0414 15:22:30.829489 1203639 retry.go:31] will retry after 7.939625ms: open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/scheduled-stop-060144/pid: no such file or directory
I0414 15:22:30.837735 1203639 retry.go:31] will retry after 8.315976ms: open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/scheduled-stop-060144/pid: no such file or directory
I0414 15:22:30.846998 1203639 retry.go:31] will retry after 17.169503ms: open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/scheduled-stop-060144/pid: no such file or directory
I0414 15:22:30.865281 1203639 retry.go:31] will retry after 15.267191ms: open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/scheduled-stop-060144/pid: no such file or directory
I0414 15:22:30.881504 1203639 retry.go:31] will retry after 48.098133ms: open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/scheduled-stop-060144/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-060144 --cancel-scheduled
E0414 15:22:42.651736 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/functional-905978/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-060144 -n scheduled-stop-060144
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-060144
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-060144 --schedule 15s
E0414 15:22:59.575468 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/functional-905978/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-060144
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-060144: exit status 7 (70.125327ms)
-- stdout --
	scheduled-stop-060144
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-060144 -n scheduled-stop-060144
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-060144 -n scheduled-stop-060144: exit status 7 (70.38701ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-060144" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-060144
--- PASS: TestScheduledStopUnix (116.86s)
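The run of "retry.go:31] will retry after ..." lines above is a poll for the scheduled stop's pid file with a short, roughly doubling backoff, retrying while the open fails with "no such file or directory". A minimal sketch of that pattern follows; the path and bounds are assumptions, not minikube's retry.go.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForFile polls until path exists, sleeping with doubling backoff.
func waitForFile(path string, maxWait time.Duration) error {
	backoff := 100 * time.Microsecond
	deadline := time.Now().Add(maxWait)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // pid file exists; the caller can now read the scheduler's pid
		}
		fmt.Printf("will retry after %v\n", backoff)
		time.Sleep(backoff)
		backoff *= 2
	}
	return fmt.Errorf("timed out after %v waiting for %s", maxWait, path)
}

func main() {
	// Hypothetical path shaped like the profile pid file in the log above.
	if err := waitForFile("/tmp/scheduled-stop-060144/pid", time.Second); err != nil {
		fmt.Println(err)
	}
}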

TestRunningBinaryUpgrade (160.77s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.4246496541 start -p running-upgrade-442332 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.4246496541 start -p running-upgrade-442332 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m13.599484446s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-442332 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-442332 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m23.598767304s)
helpers_test.go:175: Cleaning up "running-upgrade-442332" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-442332
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-442332: (1.270552574s)
--- PASS: TestRunningBinaryUpgrade (160.77s)

TestKubernetesUpgrade (173.03s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-497019 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-497019 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m21.756271406s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-497019
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-497019: (1.517999095s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-497019 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-497019 status --format={{.Host}}: exit status 7 (83.885657ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-497019 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0414 15:27:59.575534 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/functional-905978/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-497019 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (50.724260719s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-497019 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-497019 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-497019 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (88.270418ms)
-- stdout --
	* [kubernetes-upgrade-497019] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20512-1196368/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1196368/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-497019
	    minikube start -p kubernetes-upgrade-497019 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4970192 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.2, by running:
	    
	    minikube start -p kubernetes-upgrade-497019 --kubernetes-version=v1.32.2
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-497019 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-497019 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (37.397541433s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-497019" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-497019
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-497019: (1.399296853s)
--- PASS: TestKubernetesUpgrade (173.03s)
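The exit-106 refusal above is minikube's downgrade guard: restarting an existing v1.32.2 cluster with an older --kubernetes-version is rejected rather than attempting an unsafe in-place downgrade, and the suggestion text offers delete/recreate paths instead. A hand-rolled sketch of the version comparison behind such a guard follows; it is not minikube's actual code.

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// cmp compares "vMAJOR.MINOR.PATCH" strings; malformed parts compare as 0.
func cmp(a, b string) int {
	pa := strings.Split(strings.TrimPrefix(a, "v"), ".")
	pb := strings.Split(strings.TrimPrefix(b, "v"), ".")
	for i := 0; i < 3; i++ {
		na, nb := 0, 0
		if i < len(pa) {
			na, _ = strconv.Atoi(pa[i])
		}
		if i < len(pb) {
			nb, _ = strconv.Atoi(pb[i])
		}
		if na != nb {
			if na < nb {
				return -1
			}
			return 1
		}
	}
	return 0
}

func main() {
	existing, requested := "v1.32.2", "v1.20.0"
	if cmp(requested, existing) < 0 {
		// Mirrors the K8S_DOWNGRADE_UNSUPPORTED message (exit status 106 above).
		fmt.Printf("Unable to safely downgrade existing Kubernetes %s cluster to %s\n",
			existing, requested)
	}
}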

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-456285 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-456285 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (84.952115ms)
-- stdout --
	* [NoKubernetes-456285] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20512-1196368/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1196368/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
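The test passes because minikube rejects the contradictory pair of flags up front: --no-kubernetes with an explicit --kubernetes-version is a usage error (exit status 14). A tiny sketch of that mutual-exclusion check follows; the flag names match the CLI, but the validation code itself is an assumption.

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noKubernetes := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
	flag.Parse()

	if *noKubernetes && *kubernetesVersion != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // MK_USAGE
	}
	fmt.Println("flags ok")
}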

TestNoKubernetes/serial/StartWithK8s (93.03s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-456285 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-456285 --driver=kvm2  --container-runtime=containerd: (1m32.783437745s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-456285 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (93.03s)

TestNetworkPlugins/group/false (3.23s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-378177 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-378177 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (107.196906ms)
-- stdout --
	* [false-378177] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20512-1196368/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1196368/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I0414 15:23:45.191779 1237382 out.go:345] Setting OutFile to fd 1 ...
	I0414 15:23:45.192048 1237382 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 15:23:45.192060 1237382 out.go:358] Setting ErrFile to fd 2...
	I0414 15:23:45.192066 1237382 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 15:23:45.192286 1237382 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1196368/.minikube/bin
	I0414 15:23:45.192946 1237382 out.go:352] Setting JSON to false
	I0414 15:23:45.195545 1237382 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":25568,"bootTime":1744618657,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 15:23:45.195627 1237382 start.go:139] virtualization: kvm guest
	I0414 15:23:45.197572 1237382 out.go:177] * [false-378177] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 15:23:45.198778 1237382 out.go:177]   - MINIKUBE_LOCATION=20512
	I0414 15:23:45.198787 1237382 notify.go:220] Checking for updates...
	I0414 15:23:45.201095 1237382 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 15:23:45.202167 1237382 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20512-1196368/kubeconfig
	I0414 15:23:45.203380 1237382 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1196368/.minikube
	I0414 15:23:45.204389 1237382 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 15:23:45.205442 1237382 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 15:23:45.207052 1237382 config.go:182] Loaded profile config "NoKubernetes-456285": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 15:23:45.207170 1237382 config.go:182] Loaded profile config "force-systemd-env-620360": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 15:23:45.207294 1237382 config.go:182] Loaded profile config "offline-containerd-418592": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0414 15:23:45.207414 1237382 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 15:23:45.245702 1237382 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 15:23:45.246811 1237382 start.go:297] selected driver: kvm2
	I0414 15:23:45.246831 1237382 start.go:901] validating driver "kvm2" against <nil>
	I0414 15:23:45.246843 1237382 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 15:23:45.248916 1237382 out.go:201] 
	W0414 15:23:45.250055 1237382 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0414 15:23:45.251084 1237382 out.go:201] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-378177 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-378177

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-378177

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-378177

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-378177

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-378177

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-378177

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-378177

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-378177

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-378177

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-378177

>>> host: /etc/nsswitch.conf:
* Profile "false-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378177"

>>> host: /etc/hosts:
* Profile "false-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378177"

>>> host: /etc/resolv.conf:
* Profile "false-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378177"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-378177

>>> host: crictl pods:
* Profile "false-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378177"

>>> host: crictl containers:
* Profile "false-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378177"

>>> k8s: describe netcat deployment:
error: context "false-378177" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-378177" does not exist

>>> k8s: netcat logs:
error: context "false-378177" does not exist

>>> k8s: describe coredns deployment:
error: context "false-378177" does not exist

>>> k8s: describe coredns pods:
error: context "false-378177" does not exist

>>> k8s: coredns logs:
error: context "false-378177" does not exist

>>> k8s: describe api server pod(s):
error: context "false-378177" does not exist

>>> k8s: api server logs:
error: context "false-378177" does not exist

>>> host: /etc/cni:
* Profile "false-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378177"

>>> host: ip a s:
* Profile "false-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378177"

>>> host: ip r s:
* Profile "false-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378177"

>>> host: iptables-save:
* Profile "false-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378177"

>>> host: iptables table nat:
* Profile "false-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378177"

>>> k8s: describe kube-proxy daemon set:
error: context "false-378177" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-378177" does not exist

>>> k8s: kube-proxy logs:
error: context "false-378177" does not exist

>>> host: kubelet daemon status:
* Profile "false-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378177"

>>> host: kubelet daemon config:
* Profile "false-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378177"

>>> k8s: kubelet logs:
* Profile "false-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378177"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378177"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378177"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-378177

>>> host: docker daemon status:
* Profile "false-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378177"

>>> host: docker daemon config:
* Profile "false-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378177"

>>> host: /etc/docker/daemon.json:
* Profile "false-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378177"

>>> host: docker system info:
* Profile "false-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378177"

>>> host: cri-docker daemon status:
* Profile "false-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378177"

>>> host: cri-docker daemon config:
* Profile "false-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378177"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378177"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378177"

>>> host: cri-dockerd version:
* Profile "false-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378177"

>>> host: containerd daemon status:
* Profile "false-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378177"

>>> host: containerd daemon config:
* Profile "false-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378177"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378177"

>>> host: /etc/containerd/config.toml:
* Profile "false-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378177"

>>> host: containerd config dump:
* Profile "false-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378177"

>>> host: crio daemon status:
* Profile "false-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378177"

>>> host: crio daemon config:
* Profile "false-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378177"

>>> host: /etc/crio:
* Profile "false-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378177"

>>> host: crio config:
* Profile "false-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378177"
----------------------- debugLogs end: false-378177 [took: 2.974830373s] --------------------------------
helpers_test.go:175: Cleaning up "false-378177" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-378177
--- PASS: TestNetworkPlugins/group/false (3.23s)
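The exit-14 rejection this test expects comes from minikube's runtime/CNI validation: as the stderr above says, the containerd runtime requires CNI, so `--cni=false` is refused before any VM is created. A minimal sketch of such a rule follows; the function shape is an assumption, not minikube's validation code.

package main

import (
	"fmt"
	"os"
)

// validateCNI mimics the rule in the stderr above: CRI-based runtimes
// (containerd, crio) need a CNI plugin, so only docker may disable CNI.
func validateCNI(runtime, cni string) error {
	if cni == "false" && runtime != "docker" {
		return fmt.Errorf("The %q container runtime requires CNI", runtime)
	}
	return nil
}

func main() {
	if err := validateCNI("containerd", "false"); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
		os.Exit(14)
	}
}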

TestNoKubernetes/serial/StartWithStopK8s (78.58s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-456285 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-456285 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (1m16.649160109s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-456285 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-456285 status -o json: exit status 2 (244.692303ms)

-- stdout --
	{"Name":"NoKubernetes-456285","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-456285
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-456285: (1.689662113s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (78.58s)
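
The status check above is worth noting: "minikube status -o json" exits with code 2 when a component is stopped (here the kubelet and apiserver), yet still prints the status JSON on stdout. A minimal Go sketch of how a caller can keep that output despite the nonzero exit, assuming minikube is on PATH (profile name copied from the log):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// "minikube status" uses nonzero exit codes (2 here) to signal stopped
	// components, so tolerate exec.ExitError and keep the captured stdout.
	out, err := exec.Command("minikube", "-p", "NoKubernetes-456285",
		"status", "-o", "json").Output()
	var exitErr *exec.ExitError
	if err != nil && !errors.As(err, &exitErr) {
		panic(err) // the binary could not be run at all
	}
	fmt.Printf("%s\n", out)
}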

TestNoKubernetes/serial/Start (72.75s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-456285 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-456285 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (1m12.754037724s)
--- PASS: TestNoKubernetes/serial/Start (72.75s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-456285 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-456285 "sudo systemctl is-active --quiet service kubelet": exit status 1 (218.240681ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)
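
The exit codes carry the assertion here: "systemctl is-active" returns 0 only when the unit is active, and its 3 ("inactive") surfaces as the "Process exited with status 3" / "exit status 1" pair seen above. A minimal Go sketch of the same probe, assuming minikube is on PATH (command and profile copied from the log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// A failing command is the expected outcome: it means kubelet is not active.
	cmd := exec.Command("minikube", "ssh", "-p", "NoKubernetes-456285",
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet inactive, as expected:", err)
		return
	}
	fmt.Println("unexpected: kubelet is active")
}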

TestNoKubernetes/serial/ProfileList (2.03s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (1.123454108s)
--- PASS: TestNoKubernetes/serial/ProfileList (2.03s)

TestNoKubernetes/serial/Stop (1.49s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-456285
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-456285: (1.487460077s)
--- PASS: TestNoKubernetes/serial/Stop (1.49s)

TestNoKubernetes/serial/StartNoArgs (22.18s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-456285 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-456285 --driver=kvm2  --container-runtime=containerd: (22.184155854s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (22.18s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-456285 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-456285 "sudo systemctl is-active --quiet service kubelet": exit status 1 (255.765987ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

TestStoppedBinaryUpgrade/Setup (2.30s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.30s)

TestStoppedBinaryUpgrade/Upgrade (166.17s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2858244950 start -p stopped-upgrade-822727 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2858244950 start -p stopped-upgrade-822727 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m9.042424453s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2858244950 -p stopped-upgrade-822727 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2858244950 -p stopped-upgrade-822727 stop: (2.330437297s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-822727 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-822727 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m34.768922163s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (166.17s)
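
The three commands above encode the whole scenario: the old release binary provisions and stops a cluster, and the binary under test must then adopt and restart it. A compact Go sketch of that sequence (binary paths and profile name copied from the log; driver and runtime flags omitted for brevity):

package main

import (
	"log"
	"os/exec"
)

func run(bin string, args ...string) {
	out, err := exec.Command(bin, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v failed: %v\n%s", bin, args, err, out)
	}
}

func main() {
	const old = "/tmp/minikube-v1.26.0.2858244950"
	const cur = "out/minikube-linux-amd64"
	const profile = "stopped-upgrade-822727"
	run(old, "start", "-p", profile, "--memory=2200") // provision with the old release
	run(old, "-p", profile, "stop")                   // leave a stopped cluster behind
	run(cur, "start", "-p", profile, "--memory=2200") // the new binary must restart it
}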

TestPause/serial/Start (66.88s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-731640 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-731640 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m6.879792504s)
--- PASS: TestPause/serial/Start (66.88s)

TestNetworkPlugins/group/auto/Start (62.12s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-378177 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-378177 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (1m2.117393572s)
--- PASS: TestNetworkPlugins/group/auto/Start (62.12s)

TestNetworkPlugins/group/kindnet/Start (93.29s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-378177 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-378177 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m33.285526382s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (93.29s)

TestPause/serial/SecondStartNoReconfiguration (85.09s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-731640 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-731640 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m25.069335344s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (85.09s)

TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-378177 "pgrep -a kubelet"
I0414 15:30:28.629194 1203639 config.go:182] Loaded profile config "auto-378177": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

TestNetworkPlugins/group/auto/NetCatPod (12.26s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-378177 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-rxjxm" [58eaa23b-d667-49b1-b327-7a35d0bfdb63] Pending
helpers_test.go:344: "netcat-5d86dc444-rxjxm" [58eaa23b-d667-49b1-b327-7a35d0bfdb63] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-rxjxm" [58eaa23b-d667-49b1-b327-7a35d0bfdb63] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.006897782s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.26s)
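
Each NetCatPod step follows the same pattern: force-replace the netcat deployment, then poll pods carrying the app=netcat label until one reports Running. A rough Go equivalent of that wait, assuming kubectl is on PATH (context name copied from the log; the two-minute timeout is illustrative, the suite allows up to 15m):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Read just the pod phases for the label the deployment applies.
		out, err := exec.Command("kubectl", "--context", "auto-378177",
			"get", "pods", "-l", "app=netcat",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.Contains(string(out), "Running") {
			fmt.Println("netcat pod is Running")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for app=netcat")
}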

TestNetworkPlugins/group/auto/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-378177 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-378177 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.50s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-378177 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.50s)
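
HairPin is the most subtle of the four connectivity probes: the pod dials its own Service name ("netcat") rather than localhost, which only succeeds when the network plugin supports hairpin traffic back to the originating pod. The probe is just a zero-I/O netcat dial; a Go sketch of the same call, assuming kubectl is on PATH (context and deployment names copied from the log):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// nc -z probes the port without sending data; -w 5 bounds the wait.
	out, err := exec.Command("kubectl", "--context", "auto-378177",
		"exec", "deployment/netcat", "--",
		"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080").CombinedOutput()
	if err != nil {
		log.Fatalf("hairpin dial failed: %v\n%s", err, out)
	}
	log.Println("hairpin OK: pod reached itself via its Service")
}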

TestNetworkPlugins/group/calico/Start (84.89s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-378177 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
E0414 15:30:58.678903 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/addons-537199/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-378177 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m24.889649417s)
--- PASS: TestNetworkPlugins/group/calico/Start (84.89s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-rh75w" [51e81419-c7fc-4846-b407-74e5644aea8d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00331852s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.84s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-822727
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.84s)

TestNetworkPlugins/group/custom-flannel/Start (79.86s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-378177 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-378177 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m19.858130502s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (79.86s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-378177 "pgrep -a kubelet"
I0414 15:31:06.796355 1203639 config.go:182] Loaded profile config "kindnet-378177": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-378177 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-sd9zv" [e85ad472-5cfb-4b52-afce-6dcfaf671654] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-sd9zv" [e85ad472-5cfb-4b52-afce-6dcfaf671654] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003817885s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.26s)

TestNetworkPlugins/group/kindnet/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-378177 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.12s)

TestNetworkPlugins/group/kindnet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-378177 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

TestNetworkPlugins/group/kindnet/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-378177 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

TestPause/serial/Pause (0.64s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-731640 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.64s)

TestPause/serial/VerifyStatus (0.24s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-731640 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-731640 --output=json --layout=cluster: exit status 2 (239.78785ms)

-- stdout --
	{"Name":"pause-731640","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-731640","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.24s)
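
The --layout=cluster JSON above reports a paused cluster with the HTTP-style code 418, and the command itself exits 2 in that state (the "Non-zero exit" is expected). A Go sketch of decoding the top-level fields; the struct is illustrative and trimmed to the fields visible in the sample output:

package main

import (
	"encoding/json"
	"fmt"
)

// clusterStatus mirrors only the top-level fields shown in the output above.
type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

func main() {
	raw := []byte(`{"Name":"pause-731640","StatusCode":418,"StatusName":"Paused"}`)
	var st clusterStatus
	if err := json.Unmarshal(raw, &st); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %d (%s)\n", st.Name, st.StatusCode, st.StatusName) // pause-731640: 418 (Paused)
}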

TestPause/serial/Unpause (0.61s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-731640 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.61s)

TestPause/serial/PauseAgain (0.79s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-731640 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.79s)

TestPause/serial/DeletePaused (1.07s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-731640 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-731640 --alsologtostderr -v=5: (1.06877281s)
--- PASS: TestPause/serial/DeletePaused (1.07s)

TestPause/serial/VerifyDeletedResources (0.53s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.53s)

TestNetworkPlugins/group/enable-default-cni/Start (85.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-378177 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-378177 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m25.155958096s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (85.16s)

TestNetworkPlugins/group/flannel/Start (114.29s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-378177 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
E0414 15:32:21.753418 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/addons-537199/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-378177 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m54.288309867s)
--- PASS: TestNetworkPlugins/group/flannel/Start (114.29s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-trt6h" [4565625e-805b-4864-b124-20a8b2790d74] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004326623s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-378177 "pgrep -a kubelet"
I0414 15:32:24.324360 1203639 config.go:182] Loaded profile config "custom-flannel-378177": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-378177 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-bv4t9" [8fb56844-913d-472a-bf6a-f0be3b3a3b27] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-bv4t9" [8fb56844-913d-472a-bf6a-f0be3b3a3b27] Running
I0414 15:32:28.673154 1203639 config.go:182] Loaded profile config "calico-378177": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004201348s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.25s)

TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-378177 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

TestNetworkPlugins/group/calico/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-378177 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-stwsf" [d49b689f-d5a3-40b3-904e-eb2f65930bc3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-stwsf" [d49b689f-d5a3-40b3-904e-eb2f65930bc3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.005086243s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.26s)

TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-378177 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-378177 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-378177 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

TestNetworkPlugins/group/calico/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-378177 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

TestNetworkPlugins/group/calico/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-378177 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

TestNetworkPlugins/group/calico/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-378177 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-378177 "pgrep -a kubelet"
I0414 15:32:49.358679 1203639 config.go:182] Loaded profile config "enable-default-cni-378177": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-378177 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-tb5vm" [084dcc17-57b2-47a6-9e65-5b9c001891e1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-tb5vm" [084dcc17-57b2-47a6-9e65-5b9c001891e1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004062667s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.31s)

TestNetworkPlugins/group/bridge/Start (62.87s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-378177 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-378177 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m2.869769314s)
--- PASS: TestNetworkPlugins/group/bridge/Start (62.87s)

TestStartStop/group/old-k8s-version/serial/FirstStart (188.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-012725 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
E0414 15:32:59.575469 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/functional-905978/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-012725 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (3m8.907166324s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (188.91s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-378177 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-378177 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-378177 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

TestStartStop/group/no-preload/serial/FirstStart (104.99s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-417214 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-417214 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2: (1m44.985749201s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (104.99s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-qqr62" [2f7a7a97-2401-4ba0-b6d7-70953566e07e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003706046s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-378177 "pgrep -a kubelet"
I0414 15:33:34.418562 1203639 config.go:182] Loaded profile config "flannel-378177": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

TestNetworkPlugins/group/flannel/NetCatPod (10.20s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-378177 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-57t2g" [3bc0a72b-3ac1-4bfb-b13e-d06a28459a72] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-57t2g" [3bc0a72b-3ac1-4bfb-b13e-d06a28459a72] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.00349372s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.20s)

TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-378177 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-378177 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

TestNetworkPlugins/group/flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-378177 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-378177 "pgrep -a kubelet"
I0414 15:33:55.785541 1203639 config.go:182] Loaded profile config "bridge-378177": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

TestNetworkPlugins/group/bridge/NetCatPod (10.34s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-378177 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-dtrc7" [b739be59-50d5-4423-af3d-3b4ab1de085d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-dtrc7" [b739be59-50d5-4423-af3d-3b4ab1de085d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003954835s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.34s)

TestStartStop/group/embed-certs/serial/FirstStart (69.05s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-782411 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-782411 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2: (1m9.054577367s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (69.05s)

TestNetworkPlugins/group/bridge/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-378177 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

TestNetworkPlugins/group/bridge/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-378177 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

TestNetworkPlugins/group/bridge/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-378177 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (72.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-063129 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-063129 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2: (1m12.225414099s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (72.23s)
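
default-k8s-diff-port differs from the other StartStop profiles only in passing --apiserver-port=8444. One quick way to confirm the non-default port took effect is to read the server URL back from the kubeconfig; a sketch assuming kubectl is on PATH and that minikube named the kubeconfig cluster entry after the profile:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Pull the server URL for this cluster entry; expect it to end in :8444.
	out, err := exec.Command("kubectl", "config", "view", "-o",
		`jsonpath={.clusters[?(@.name=="default-k8s-diff-port-063129")].cluster.server}`).Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("API server:", string(out))
}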

TestStartStop/group/no-preload/serial/DeployApp (9.49s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-417214 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6b37c3c4-d5a4-43ee-827d-daf89d567d5c] Pending
helpers_test.go:344: "busybox" [6b37c3c4-d5a4-43ee-827d-daf89d567d5c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6b37c3c4-d5a4-43ee-827d-daf89d567d5c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.005026439s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-417214 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.49s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.35s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-417214 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-417214 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.274240225s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-417214 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.35s)

TestStartStop/group/no-preload/serial/Stop (90.82s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-417214 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-417214 --alsologtostderr -v=3: (1m30.824620345s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (90.82s)

TestStartStop/group/embed-certs/serial/DeployApp (10.30s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-782411 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e7704e1b-bfec-4f3c-81a2-8067197c354d] Pending
helpers_test.go:344: "busybox" [e7704e1b-bfec-4f3c-81a2-8067197c354d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e7704e1b-bfec-4f3c-81a2-8067197c354d] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003095614s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-782411 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.30s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-782411 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-782411 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

TestStartStop/group/embed-certs/serial/Stop (91.32s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-782411 --alsologtostderr -v=3
E0414 15:35:28.868857 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/auto-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:35:28.875383 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/auto-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:35:28.886797 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/auto-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:35:28.908236 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/auto-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:35:28.949881 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/auto-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:35:29.031396 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/auto-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:35:29.192978 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/auto-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:35:29.514325 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/auto-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:35:30.156497 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/auto-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:35:31.438343 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/auto-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:35:34.000303 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/auto-378177/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-782411 --alsologtostderr -v=3: (1m31.315004892s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.32s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-063129 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [55aab4cf-f412-436a-b753-fb428b52366f] Pending
helpers_test.go:344: "busybox" [55aab4cf-f412-436a-b753-fb428b52366f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0414 15:35:39.122298 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/auto-378177/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [55aab4cf-f412-436a-b753-fb428b52366f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003560752s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-063129 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.27s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.94s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-063129 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-063129 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.94s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (91.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-063129 --alsologtostderr -v=3
E0414 15:35:49.364318 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/auto-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:35:58.678916 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/addons-537199/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:36:00.587793 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/kindnet-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:36:00.594246 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/kindnet-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:36:00.606341 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/kindnet-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:36:00.627765 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/kindnet-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:36:00.669245 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/kindnet-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:36:00.750790 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/kindnet-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:36:00.912401 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/kindnet-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:36:01.234203 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/kindnet-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:36:01.875955 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/kindnet-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:36:03.157887 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/kindnet-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:36:05.720112 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/kindnet-378177/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-063129 --alsologtostderr -v=3: (1m31.181691692s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.18s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-012725 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2d10bdc3-81cb-4de6-9cfc-e58a7068eee5] Pending
helpers_test.go:344: "busybox" [2d10bdc3-81cb-4de6-9cfc-e58a7068eee5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0414 15:36:09.845757 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/auto-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:36:10.842001 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/kindnet-378177/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [2d10bdc3-81cb-4de6-9cfc-e58a7068eee5] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003918791s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-012725 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.45s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.87s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-012725 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-012725 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.87s)

TestStartStop/group/old-k8s-version/serial/Stop (91.5s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-012725 --alsologtostderr -v=3
E0414 15:36:21.083542 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/kindnet-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:36:41.565705 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/kindnet-378177/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-012725 --alsologtostderr -v=3: (1m31.504164214s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (91.50s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-417214 -n no-preload-417214
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-417214 -n no-preload-417214: exit status 7 (64.998784ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-417214 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)
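
Worth noting in the step above: on a stopped profile, minikube status prints "Stopped" and exits with code 7, which the test explicitly tolerates ("may be ok"), and addons can still be toggled while the VM is down. A minimal sketch of the same sequence, assuming the same profile name:

  # status exits 7 for a stopped profile; report rather than fail on it
  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-417214 -n no-preload-417214 || echo "status exit code: $? (7 expected while stopped)"
  # enabling an addon does not require the cluster to be running
  out/minikube-linux-amd64 addons enable dashboard -p no-preload-417214 --images=MetricsScraper=registry.k8s.io/echoserver:1.4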

TestStartStop/group/no-preload/serial/SecondStart (319.42s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-417214 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2
E0414 15:36:50.808104 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/auto-378177/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-417214 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2: (5m19.111582262s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-417214 -n no-preload-417214
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (319.42s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-782411 -n embed-certs-782411
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-782411 -n embed-certs-782411: exit status 7 (67.844543ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-782411 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (318.37s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-782411 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-782411 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2: (5m18.031225068s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-782411 -n embed-certs-782411
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (318.37s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-063129 -n default-k8s-diff-port-063129
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-063129 -n default-k8s-diff-port-063129: exit status 7 (70.589725ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-063129 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (305.43s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-063129 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2
E0414 15:37:22.421422 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/calico-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:37:22.427835 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/calico-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:37:22.439280 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/calico-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:37:22.460679 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/calico-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:37:22.502696 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/calico-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:37:22.527398 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/kindnet-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:37:22.584832 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/calico-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:37:22.746586 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/calico-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:37:23.067900 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/calico-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:37:23.710107 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/calico-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:37:24.562492 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/custom-flannel-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:37:24.568927 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/custom-flannel-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:37:24.580516 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/custom-flannel-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:37:24.602081 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/custom-flannel-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:37:24.644163 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/custom-flannel-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:37:24.725749 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/custom-flannel-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:37:24.888113 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/custom-flannel-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:37:24.992362 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/calico-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:37:25.209820 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/custom-flannel-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:37:25.852202 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/custom-flannel-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:37:27.134185 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/custom-flannel-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:37:27.554249 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/calico-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:37:29.695921 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/custom-flannel-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:37:32.676541 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/calico-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:37:34.817380 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/custom-flannel-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:37:42.918849 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/calico-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:37:45.058819 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/custom-flannel-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:37:49.638792 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/enable-default-cni-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:37:49.645245 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/enable-default-cni-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:37:49.656675 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/enable-default-cni-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:37:49.678219 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/enable-default-cni-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:37:49.719778 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/enable-default-cni-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:37:49.801308 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/enable-default-cni-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:37:49.963182 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/enable-default-cni-378177/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-063129 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2: (5m5.1794936s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-063129 -n default-k8s-diff-port-063129
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (305.43s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-012725 -n old-k8s-version-012725
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-012725 -n old-k8s-version-012725: exit status 7 (80.981078ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-012725 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0414 15:37:50.284608 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/enable-default-cni-378177/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/old-k8s-version/serial/SecondStart (161.97s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-012725 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
E0414 15:37:50.926961 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/enable-default-cni-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:37:52.208798 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/enable-default-cni-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:37:54.770855 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/enable-default-cni-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:37:59.575433 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/functional-905978/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:37:59.892477 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/enable-default-cni-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:38:03.401080 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/calico-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:38:05.540547 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/custom-flannel-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:38:10.134560 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/enable-default-cni-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:38:12.730216 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/auto-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:38:28.204060 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/flannel-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:38:28.210542 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/flannel-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:38:28.222012 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/flannel-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:38:28.243466 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/flannel-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:38:28.284951 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/flannel-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:38:28.367189 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/flannel-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:38:28.529035 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/flannel-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:38:28.851236 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/flannel-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:38:29.492950 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/flannel-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:38:30.616698 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/enable-default-cni-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:38:30.775332 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/flannel-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:38:33.337092 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/flannel-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:38:38.459390 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/flannel-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:38:44.363412 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/calico-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:38:44.448964 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/kindnet-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:38:46.502773 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/custom-flannel-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:38:48.701022 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/flannel-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:38:56.099628 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/bridge-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:38:56.106064 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/bridge-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:38:56.117447 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/bridge-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:38:56.138868 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/bridge-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:38:56.180305 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/bridge-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:38:56.262007 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/bridge-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:38:56.423568 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/bridge-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:38:56.745319 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/bridge-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:38:57.387338 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/bridge-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:38:58.668920 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/bridge-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:39:01.230719 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/bridge-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:39:06.352365 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/bridge-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:39:09.182908 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/flannel-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:39:11.578554 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/enable-default-cni-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:39:16.594118 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/bridge-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:39:22.653573 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/functional-905978/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:39:37.076396 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/bridge-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:39:50.145252 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/flannel-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:40:06.285244 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/calico-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:40:08.424180 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/custom-flannel-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:40:18.038623 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/bridge-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:40:28.868068 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/auto-378177/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-012725 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m41.683381588s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-012725 -n old-k8s-version-012725
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (161.97s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-mjsz5" [fdda271c-4402-4847-8920-5325993f3515] Running
E0414 15:40:33.500567 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/enable-default-cni-378177/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004119807s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-mjsz5" [fdda271c-4402-4847-8920-5325993f3515] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003285588s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-012725 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-012725 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)
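
The image verification above is a single CLI call whose JSON output the test scans for images outside the expected Kubernetes set (hence the two "Found non-minikube image" notes). The same inventory can be inspected manually; piping through jq is an assumption for readability here, not something the test itself does:

  # dump the profile's image inventory as JSON and pretty-print it
  out/minikube-linux-amd64 -p old-k8s-version-012725 image list --format=json | jq .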

TestStartStop/group/old-k8s-version/serial/Pause (2.47s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-012725 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-012725 -n old-k8s-version-012725
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-012725 -n old-k8s-version-012725: exit status 2 (248.388247ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-012725 -n old-k8s-version-012725
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-012725 -n old-k8s-version-012725: exit status 2 (234.291933ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-012725 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-012725 -n old-k8s-version-012725
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-012725 -n old-k8s-version-012725
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.47s)
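
The Pause check encodes minikube's status conventions: while a profile is paused, {{.APIServer}} reports Paused, {{.Kubelet}} reports Stopped, and status exits 2, which the test again treats as acceptable. A sketch of the pause/verify/unpause round trip, using the same commands as the log:

  out/minikube-linux-amd64 pause -p old-k8s-version-012725 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-012725 -n old-k8s-version-012725   # "Paused", exit 2
  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-012725 -n old-k8s-version-012725     # "Stopped", exit 2
  out/minikube-linux-amd64 unpause -p old-k8s-version-012725 --alsologtostderr -v=1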

TestStartStop/group/newest-cni/serial/FirstStart (45.33s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-110879 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2
E0414 15:40:56.572102 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/auto-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:40:58.679180 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/addons-537199/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:41:00.588534 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/kindnet-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:41:08.561246 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/old-k8s-version-012725/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:41:08.567655 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/old-k8s-version-012725/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:41:08.578894 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/old-k8s-version-012725/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:41:08.600567 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/old-k8s-version-012725/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:41:08.641840 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/old-k8s-version-012725/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:41:08.725006 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/old-k8s-version-012725/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:41:08.886617 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/old-k8s-version-012725/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:41:09.208636 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/old-k8s-version-012725/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:41:09.850511 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/old-k8s-version-012725/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:41:11.132204 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/old-k8s-version-012725/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:41:12.067175 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/flannel-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:41:13.693924 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/old-k8s-version-012725/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:41:18.815541 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/old-k8s-version-012725/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:41:28.290926 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/kindnet-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:41:29.057487 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/old-k8s-version-012725/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-110879 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2: (45.329980137s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (45.33s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-110879 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-110879 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.195229343s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.20s)

TestStartStop/group/newest-cni/serial/Stop (2.32s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-110879 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-110879 --alsologtostderr -v=3: (2.32232238s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.32s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-110879 -n newest-cni-110879
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-110879 -n newest-cni-110879: exit status 7 (74.477626ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-110879 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (37.38s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-110879 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2
E0414 15:41:39.960784 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/bridge-378177/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:41:49.539835 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/old-k8s-version-012725/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-110879 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2: (37.079799258s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-110879 -n newest-cni-110879
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.38s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-vcjhg" [6d5c8526-5247-453c-9989-8350413445e2] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00439306s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-vcjhg" [6d5c8526-5247-453c-9989-8350413445e2] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005011442s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-417214 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-417214 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-110879 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/no-preload/serial/Pause (3.11s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-417214 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-417214 -n no-preload-417214
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-417214 -n no-preload-417214: exit status 2 (289.568032ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-417214 -n no-preload-417214
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-417214 -n no-preload-417214: exit status 2 (314.535564ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-417214 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-417214 -n no-preload-417214
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-417214 -n no-preload-417214
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.11s)
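Each Pause subtest in this report drives the same four-step cycle, and the exit-status-2 results are expected while the node is paused. A by-hand replay against the same profile, assuming it is still running, would look like:

  out/minikube-linux-amd64 pause -p no-preload-417214 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-417214 -n no-preload-417214   # prints "Paused", exits 2
  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-417214 -n no-preload-417214     # prints "Stopped", exits 2
  out/minikube-linux-amd64 unpause -p no-preload-417214 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-417214 -n no-preload-417214   # should report Running again

The harness treats exit status 2 from status as acceptable ("may be ok") because a paused apiserver and a stopped kubelet are exactly what pause is meant to produce.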
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-pbl2p" [70d0040f-c7d7-4b7e-a7a8-9feae0c35cac] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004093161s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/newest-cni/serial/Pause (2.96s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-110879 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-110879 -n newest-cni-110879
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-110879 -n newest-cni-110879: exit status 2 (311.33288ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-110879 -n newest-cni-110879
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-110879 -n newest-cni-110879: exit status 2 (316.795312ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-110879 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-110879 -n newest-cni-110879
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-110879 -n newest-cni-110879
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.96s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-pbl2p" [70d0040f-c7d7-4b7e-a7a8-9feae0c35cac] Running
E0414 15:42:22.420846 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/calico-378177/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00365837s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-782411 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)
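The AddonExistsAfterStop checks all follow the same pattern: wait up to 9m0s for pods matching a label, then describe the companion deployment. A rough equivalent with plain kubectl, assuming the test context still exists, is:

  kubectl --context embed-certs-782411 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=540s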
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-8pkts" [96951009-0346-445f-9d20-fb3da7b932f5] Running
E0414 15:42:24.562484 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/custom-flannel-378177/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003866947s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-782411 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/embed-certs/serial/Pause (2.49s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-782411 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-782411 -n embed-certs-782411
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-782411 -n embed-certs-782411: exit status 2 (233.304374ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-782411 -n embed-certs-782411
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-782411 -n embed-certs-782411: exit status 2 (234.810631ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-782411 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-782411 -n embed-certs-782411
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-782411 -n embed-certs-782411
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.49s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-8pkts" [96951009-0346-445f-9d20-fb3da7b932f5] Running
E0414 15:42:30.501493 1203639 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1196368/.minikube/profiles/old-k8s-version-012725/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004148386s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-063129 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-063129 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.56s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-063129 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-063129 -n default-k8s-diff-port-063129
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-063129 -n default-k8s-diff-port-063129: exit status 2 (242.489912ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-063129 -n default-k8s-diff-port-063129
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-063129 -n default-k8s-diff-port-063129: exit status 2 (236.576447ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-063129 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-063129 -n default-k8s-diff-port-063129
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-063129 -n default-k8s-diff-port-063129
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.56s)
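The --format={{.APIServer}} and --format={{.Kubelet}} arguments used throughout these Pause checks are Go templates evaluated against minikube's status struct. To see every field the templates can reference, the whole status can be dumped, assuming this build supports the JSON output option of minikube status:

  out/minikube-linux-amd64 status -p default-k8s-diff-port-063129 -o json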
Test skip (39/326)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.32.2/cached-images 0
15 TestDownloadOnly/v1.32.2/binaries 0
16 TestDownloadOnly/v1.32.2/kubectl 0
20 TestDownloadOnlyKic 0
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
126 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
130 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestFunctionalNewestKubernetes 0
157 TestGvisorAddon 0
176 TestImageBuild 0
203 TestKicCustomNetwork 0
204 TestKicExistingNetwork 0
205 TestKicCustomSubnet 0
206 TestKicStaticIP 0
238 TestChangeNoneUser 0
241 TestScheduledStopWindows 0
243 TestSkaffold 0
245 TestInsufficientStorage 0
249 TestMissingContainerUpgrade 0
254 TestNetworkPlugins/group/kubenet 3.04
263 TestNetworkPlugins/group/cilium 3.3
278 TestStartStop/group/disable-driver-mounts 0.16

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.32.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.32.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.2/cached-images (0.00s)

TestDownloadOnly/v1.32.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.32.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.2/binaries (0.00s)

TestDownloadOnly/v1.32.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.32.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.2/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)
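TestDockerFlags skips because this job pins the runtime to containerd. On a run configured for the docker runtime it would exercise docker-specific start flags; a hypothetical invocation (profile name and flag values invented for illustration, using the real --docker-env and --docker-opt options of minikube start) would look like:

  out/minikube-linux-amd64 start -p docker-flags-test --driver=kvm2 --container-runtime=docker --docker-env=FOO=BAR --docker-opt=debug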
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
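All eight TunnelCmd subtests above skip for the same reason: minikube tunnel needs to edit the host routing table, and the test user cannot run route without a password prompt. One way to unblock them on a dedicated CI host is a sudoers drop-in; this is only a sketch, with a hypothetical file path, and the user name and binary locations would need checking against the actual runner:

  # /etc/sudoers.d/minikube-tunnel  (hypothetical; verify user and paths first)
  jenkins ALL=(ALL) NOPASSWD: /sbin/route, /usr/sbin/ip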
TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (3.04s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:631: 
----------------------- debugLogs start: kubenet-378177 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-378177

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-378177

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-378177

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-378177

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-378177

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-378177

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-378177

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-378177

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-378177

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-378177

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378177"

>>> host: /etc/hosts:
* Profile "kubenet-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378177"

>>> host: /etc/resolv.conf:
* Profile "kubenet-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378177"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-378177

>>> host: crictl pods:
* Profile "kubenet-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378177"

>>> host: crictl containers:
* Profile "kubenet-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378177"

>>> k8s: describe netcat deployment:
error: context "kubenet-378177" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-378177" does not exist

>>> k8s: netcat logs:
error: context "kubenet-378177" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-378177" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-378177" does not exist

>>> k8s: coredns logs:
error: context "kubenet-378177" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-378177" does not exist

>>> k8s: api server logs:
error: context "kubenet-378177" does not exist

>>> host: /etc/cni:
* Profile "kubenet-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378177"

>>> host: ip a s:
* Profile "kubenet-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378177"

>>> host: ip r s:
* Profile "kubenet-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378177"

>>> host: iptables-save:
* Profile "kubenet-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378177"

>>> host: iptables table nat:
* Profile "kubenet-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378177"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-378177" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-378177" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-378177" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378177"

>>> host: kubelet daemon config:
* Profile "kubenet-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378177"

>>> k8s: kubelet logs:
* Profile "kubenet-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378177"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378177"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378177"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-378177

>>> host: docker daemon status:
* Profile "kubenet-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378177"

>>> host: docker daemon config:
* Profile "kubenet-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378177"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378177"

>>> host: docker system info:
* Profile "kubenet-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378177"

>>> host: cri-docker daemon status:
* Profile "kubenet-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378177"

>>> host: cri-docker daemon config:
* Profile "kubenet-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378177"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378177"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378177"

>>> host: cri-dockerd version:
* Profile "kubenet-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378177"

>>> host: containerd daemon status:
* Profile "kubenet-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378177"

>>> host: containerd daemon config:
* Profile "kubenet-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378177"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378177"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378177"

>>> host: containerd config dump:
* Profile "kubenet-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378177"

>>> host: crio daemon status:
* Profile "kubenet-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378177"

>>> host: crio daemon config:
* Profile "kubenet-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378177"

>>> host: /etc/crio:
* Profile "kubenet-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378177"

>>> host: crio config:
* Profile "kubenet-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378177"
----------------------- debugLogs end: kubenet-378177 [took: 2.88304224s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-378177" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-378177
--- SKIP: TestNetworkPlugins/group/kubenet (3.04s)

TestNetworkPlugins/group/cilium (3.3s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:631: 
----------------------- debugLogs start: cilium-378177 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-378177

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-378177

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-378177

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-378177

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-378177

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-378177

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-378177

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-378177

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-378177

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-378177

>>> host: /etc/nsswitch.conf:
* Profile "cilium-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378177"

>>> host: /etc/hosts:
* Profile "cilium-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378177"

>>> host: /etc/resolv.conf:
* Profile "cilium-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378177"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-378177

>>> host: crictl pods:
* Profile "cilium-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378177"

>>> host: crictl containers:
* Profile "cilium-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378177"

>>> k8s: describe netcat deployment:
error: context "cilium-378177" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-378177" does not exist

>>> k8s: netcat logs:
error: context "cilium-378177" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-378177" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-378177" does not exist

>>> k8s: coredns logs:
error: context "cilium-378177" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-378177" does not exist

>>> k8s: api server logs:
error: context "cilium-378177" does not exist

>>> host: /etc/cni:
* Profile "cilium-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378177"

>>> host: ip a s:
* Profile "cilium-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378177"

>>> host: ip r s:
* Profile "cilium-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378177"

>>> host: iptables-save:
* Profile "cilium-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378177"

>>> host: iptables table nat:
* Profile "cilium-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378177"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-378177

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-378177

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-378177" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-378177" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-378177

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-378177

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-378177" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-378177" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-378177" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-378177" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-378177" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378177"

>>> host: kubelet daemon config:
* Profile "cilium-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378177"

>>> k8s: kubelet logs:
* Profile "cilium-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378177"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378177"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378177"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-378177

>>> host: docker daemon status:
* Profile "cilium-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378177"

>>> host: docker daemon config:
* Profile "cilium-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378177"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378177"

>>> host: docker system info:
* Profile "cilium-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378177"

>>> host: cri-docker daemon status:
* Profile "cilium-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378177"

>>> host: cri-docker daemon config:
* Profile "cilium-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378177"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378177"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378177"

>>> host: cri-dockerd version:
* Profile "cilium-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378177"

>>> host: containerd daemon status:
* Profile "cilium-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378177"

>>> host: containerd daemon config:
* Profile "cilium-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378177"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378177"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378177"

>>> host: containerd config dump:
* Profile "cilium-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378177"

>>> host: crio daemon status:
* Profile "cilium-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378177"

>>> host: crio daemon config:
* Profile "cilium-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378177"

>>> host: /etc/crio:
* Profile "cilium-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378177"

>>> host: crio config:
* Profile "cilium-378177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378177"

----------------------- debugLogs end: cilium-378177 [took: 3.160160657s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-378177" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-378177
--- SKIP: TestNetworkPlugins/group/cilium (3.30s)

x
+
TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-130916" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-130916
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)